Apr 04 01:55:19 crc systemd[1]: Starting Kubernetes Kubelet...
Apr 04 01:55:19 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Apr 04 01:55:19 crc restorecon[4680]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 
01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Apr 04 01:55:19
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 
01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 
04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 
crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc 
restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:19 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Apr 04 01:55:20 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 04 01:55:20 crc kubenswrapper[4681]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.917649 4681 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924500 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924532 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924541 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924566 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924577 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924586 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924595 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924604 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924612 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924620 4681 
feature_gate.go:330] unrecognized feature gate: ManagedBootImages Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924629 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924637 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924645 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924655 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924664 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924672 4681 feature_gate.go:330] unrecognized feature gate: Example Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924679 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924687 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924695 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924703 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924710 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924718 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924725 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924733 4681 feature_gate.go:330] unrecognized 
feature gate: PinnedImages Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924740 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924748 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924756 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924764 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924787 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924798 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924807 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924815 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924825 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924835 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924843 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924852 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924861 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924869 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924880 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924890 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924898 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924906 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924915 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924923 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924931 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924939 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924946 4681 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924954 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924963 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924972 4681 feature_gate.go:330] unrecognized feature gate: NewOLM Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924980 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924987 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.924995 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925003 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925010 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925020 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925028 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925035 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925043 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925050 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925058 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925065 
4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925073 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925080 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925103 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925111 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925118 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925126 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925133 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925141 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.925148 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927092 4681 flags.go:64] FLAG: --address="0.0.0.0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927116 4681 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927139 4681 flags.go:64] FLAG: --anonymous-auth="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927151 4681 flags.go:64] FLAG: --application-metrics-count-limit="100" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927162 4681 flags.go:64] FLAG: --authentication-token-webhook="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927171 4681 flags.go:64] FLAG: 
--authentication-token-webhook-cache-ttl="2m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927182 4681 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927193 4681 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927202 4681 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927212 4681 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927221 4681 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927231 4681 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927240 4681 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927249 4681 flags.go:64] FLAG: --cgroup-root="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927258 4681 flags.go:64] FLAG: --cgroups-per-qos="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927294 4681 flags.go:64] FLAG: --client-ca-file="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927302 4681 flags.go:64] FLAG: --cloud-config="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927311 4681 flags.go:64] FLAG: --cloud-provider="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927319 4681 flags.go:64] FLAG: --cluster-dns="[]" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927337 4681 flags.go:64] FLAG: --cluster-domain="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927346 4681 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927355 4681 flags.go:64] FLAG: --config-dir="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927364 
4681 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927373 4681 flags.go:64] FLAG: --container-log-max-files="5" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927385 4681 flags.go:64] FLAG: --container-log-max-size="10Mi" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927394 4681 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927403 4681 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927412 4681 flags.go:64] FLAG: --containerd-namespace="k8s.io" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927421 4681 flags.go:64] FLAG: --contention-profiling="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927442 4681 flags.go:64] FLAG: --cpu-cfs-quota="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927451 4681 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927461 4681 flags.go:64] FLAG: --cpu-manager-policy="none" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927469 4681 flags.go:64] FLAG: --cpu-manager-policy-options="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927480 4681 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927489 4681 flags.go:64] FLAG: --enable-controller-attach-detach="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927498 4681 flags.go:64] FLAG: --enable-debugging-handlers="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927507 4681 flags.go:64] FLAG: --enable-load-reader="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927515 4681 flags.go:64] FLAG: --enable-server="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927524 4681 flags.go:64] FLAG: 
--enforce-node-allocatable="[pods]" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927547 4681 flags.go:64] FLAG: --event-burst="100" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927556 4681 flags.go:64] FLAG: --event-qps="50" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927565 4681 flags.go:64] FLAG: --event-storage-age-limit="default=0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927574 4681 flags.go:64] FLAG: --event-storage-event-limit="default=0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927583 4681 flags.go:64] FLAG: --eviction-hard="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927593 4681 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927602 4681 flags.go:64] FLAG: --eviction-minimum-reclaim="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927610 4681 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927619 4681 flags.go:64] FLAG: --eviction-soft="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927628 4681 flags.go:64] FLAG: --eviction-soft-grace-period="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927637 4681 flags.go:64] FLAG: --exit-on-lock-contention="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927646 4681 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927655 4681 flags.go:64] FLAG: --experimental-mounter-path="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927664 4681 flags.go:64] FLAG: --fail-cgroupv1="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927673 4681 flags.go:64] FLAG: --fail-swap-on="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927682 4681 flags.go:64] FLAG: --feature-gates="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927697 4681 flags.go:64] FLAG: 
--file-check-frequency="20s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927706 4681 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927716 4681 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927725 4681 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927733 4681 flags.go:64] FLAG: --healthz-port="10248" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927742 4681 flags.go:64] FLAG: --help="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927751 4681 flags.go:64] FLAG: --hostname-override="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927759 4681 flags.go:64] FLAG: --housekeeping-interval="10s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927768 4681 flags.go:64] FLAG: --http-check-frequency="20s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927777 4681 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927797 4681 flags.go:64] FLAG: --image-credential-provider-config="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927807 4681 flags.go:64] FLAG: --image-gc-high-threshold="85" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927864 4681 flags.go:64] FLAG: --image-gc-low-threshold="80" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927874 4681 flags.go:64] FLAG: --image-service-endpoint="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927883 4681 flags.go:64] FLAG: --kernel-memcg-notification="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927892 4681 flags.go:64] FLAG: --kube-api-burst="100" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927901 4681 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927912 
4681 flags.go:64] FLAG: --kube-api-qps="50" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927920 4681 flags.go:64] FLAG: --kube-reserved="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927929 4681 flags.go:64] FLAG: --kube-reserved-cgroup="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927938 4681 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927947 4681 flags.go:64] FLAG: --kubelet-cgroups="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927956 4681 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927965 4681 flags.go:64] FLAG: --lock-file="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927973 4681 flags.go:64] FLAG: --log-cadvisor-usage="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927982 4681 flags.go:64] FLAG: --log-flush-frequency="5s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.927991 4681 flags.go:64] FLAG: --log-json-info-buffer-size="0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928005 4681 flags.go:64] FLAG: --log-json-split-stream="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928013 4681 flags.go:64] FLAG: --log-text-info-buffer-size="0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928022 4681 flags.go:64] FLAG: --log-text-split-stream="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928031 4681 flags.go:64] FLAG: --logging-format="text" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928039 4681 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928049 4681 flags.go:64] FLAG: --make-iptables-util-chains="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928059 4681 flags.go:64] FLAG: --manifest-url="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928067 4681 
flags.go:64] FLAG: --manifest-url-header="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928079 4681 flags.go:64] FLAG: --max-housekeeping-interval="15s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928088 4681 flags.go:64] FLAG: --max-open-files="1000000" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928099 4681 flags.go:64] FLAG: --max-pods="110" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928107 4681 flags.go:64] FLAG: --maximum-dead-containers="-1" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928116 4681 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928130 4681 flags.go:64] FLAG: --memory-manager-policy="None" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928139 4681 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928149 4681 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928158 4681 flags.go:64] FLAG: --node-ip="192.168.126.11" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928166 4681 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928185 4681 flags.go:64] FLAG: --node-status-max-images="50" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928207 4681 flags.go:64] FLAG: --node-status-update-frequency="10s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928216 4681 flags.go:64] FLAG: --oom-score-adj="-999" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928226 4681 flags.go:64] FLAG: --pod-cidr="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928235 4681 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928255 4681 flags.go:64] FLAG: --pod-manifest-path="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928289 4681 flags.go:64] FLAG: --pod-max-pids="-1" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928300 4681 flags.go:64] FLAG: --pods-per-core="0" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928308 4681 flags.go:64] FLAG: --port="10250" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928317 4681 flags.go:64] FLAG: --protect-kernel-defaults="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928326 4681 flags.go:64] FLAG: --provider-id="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928335 4681 flags.go:64] FLAG: --qos-reserved="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928344 4681 flags.go:64] FLAG: --read-only-port="10255" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928353 4681 flags.go:64] FLAG: --register-node="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928362 4681 flags.go:64] FLAG: --register-schedulable="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928370 4681 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928393 4681 flags.go:64] FLAG: --registry-burst="10" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928401 4681 flags.go:64] FLAG: --registry-qps="5" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928410 4681 flags.go:64] FLAG: --reserved-cpus="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928419 4681 flags.go:64] FLAG: --reserved-memory="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928430 4681 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 
01:55:20.928442 4681 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928451 4681 flags.go:64] FLAG: --rotate-certificates="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928460 4681 flags.go:64] FLAG: --rotate-server-certificates="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928468 4681 flags.go:64] FLAG: --runonce="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928477 4681 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928486 4681 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928498 4681 flags.go:64] FLAG: --seccomp-default="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928507 4681 flags.go:64] FLAG: --serialize-image-pulls="true" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928515 4681 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928524 4681 flags.go:64] FLAG: --storage-driver-db="cadvisor" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928533 4681 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928542 4681 flags.go:64] FLAG: --storage-driver-password="root" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928550 4681 flags.go:64] FLAG: --storage-driver-secure="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928559 4681 flags.go:64] FLAG: --storage-driver-table="stats" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928568 4681 flags.go:64] FLAG: --storage-driver-user="root" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928576 4681 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928598 4681 flags.go:64] FLAG: --sync-frequency="1m0s" Apr 04 
01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928607 4681 flags.go:64] FLAG: --system-cgroups="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928616 4681 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928630 4681 flags.go:64] FLAG: --system-reserved-cgroup="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928638 4681 flags.go:64] FLAG: --tls-cert-file="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928647 4681 flags.go:64] FLAG: --tls-cipher-suites="[]" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928663 4681 flags.go:64] FLAG: --tls-min-version="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928671 4681 flags.go:64] FLAG: --tls-private-key-file="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928680 4681 flags.go:64] FLAG: --topology-manager-policy="none" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928689 4681 flags.go:64] FLAG: --topology-manager-policy-options="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928711 4681 flags.go:64] FLAG: --topology-manager-scope="container" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928721 4681 flags.go:64] FLAG: --v="2" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928733 4681 flags.go:64] FLAG: --version="false" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928744 4681 flags.go:64] FLAG: --vmodule="" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928755 4681 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.928764 4681 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929051 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929064 4681 feature_gate.go:330] unrecognized feature 
gate: VSphereDriverConfiguration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929072 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929080 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929088 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929096 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929106 4681 feature_gate.go:330] unrecognized feature gate: Example Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929114 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929121 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929129 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929139 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929149 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929157 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929164 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929172 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929180 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929187 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929195 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929203 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929210 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929221 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929229 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929236 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929244 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929251 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929259 4681 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929296 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929304 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929315 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929325 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929334 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929343 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929354 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929362 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929370 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929377 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929385 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929392 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929400 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929408 4681 feature_gate.go:330] unrecognized feature gate: 
VolumeGroupSnapshot Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929415 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929422 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929430 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929438 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929446 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929453 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929460 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929468 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929476 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929487 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929496 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929504 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929512 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929520 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929529 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929536 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929545 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929553 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929560 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929568 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929576 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929584 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929591 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929598 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929609 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929617 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929624 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929634 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929643 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929652 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.929661 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.930759 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.941886 4681 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.941931 4681 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942047 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942059 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942068 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942078 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942086 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942094 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942102 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942110 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942118 4681 feature_gate.go:330] unrecognized feature gate: Example
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942126 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942133 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942141 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942148 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942156 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942164 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942171 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942179 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942186 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942195 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942204 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942212 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942220 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942228 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942236 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942246 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942254 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942285 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942294 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942304 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942315 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942325 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942334 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942342 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942351 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942395 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942403 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942411 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942418 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942427 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942434 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942443 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942450 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942458 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942468 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942481 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942490 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942498 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942518 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942531 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942541 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942552 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942563 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942572 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942583 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942591 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942600 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942607 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942615 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942622 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942630 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942639 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942647 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942655 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942662 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942670 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942680 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942690 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942699 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942707 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942716 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.942725 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.942739 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943175 4681 feature_gate.go:330] unrecognized feature gate: Example
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943190 4681 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943202 4681 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943211 4681 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943220 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943230 4681 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943238 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943246 4681 feature_gate.go:330] unrecognized feature gate: PinnedImages
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943253 4681 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943289 4681 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943297 4681 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943306 4681 feature_gate.go:330] unrecognized feature gate: OVNObservability
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943314 4681 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943322 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943334 4681 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943344 4681 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943353 4681 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943361 4681 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943368 4681 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943376 4681 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943384 4681 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943395 4681 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943404 4681 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943413 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943431 4681 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943439 4681 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943447 4681 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943455 4681 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943463 4681 feature_gate.go:330] unrecognized feature gate: SignatureStores
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943470 4681 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943478 4681 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943487 4681 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943494 4681 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943501 4681 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943509 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943517 4681 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943524 4681 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943532 4681 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943539 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943546 4681 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943556 4681 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943565 4681 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943573 4681 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943581 4681 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943589 4681 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943596 4681 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943605 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943612 4681 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943620 4681 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943627 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943634 4681 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943643 4681 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943650 4681 feature_gate.go:330] unrecognized feature gate: NewOLM
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943658 4681 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943665 4681 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943673 4681 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943680 4681 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943688 4681 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943695 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943703 4681 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943711 4681 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943719 4681 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943726 4681 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943734 4681 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943742 4681 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943749 4681 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943757 4681 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943764 4681 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943772 4681 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943779 4681 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Apr 04 01:55:20 crc kubenswrapper[4681]: W0404 01:55:20.943787 4681 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.943798 4681 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.944020 4681 server.go:940] "Client rotation is on, will bootstrap in background"
Apr 04 01:55:20 crc kubenswrapper[4681]: E0404 01:55:20.950160 4681 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.956936 4681 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.957645 4681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.959076 4681 server.go:997] "Starting client certificate rotation"
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.959112 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.959347 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.987987 4681 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 04 01:55:20 crc kubenswrapper[4681]: E0404 01:55:20.990914 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError"
Apr 04 01:55:20 crc kubenswrapper[4681]: I0404 01:55:20.991059 4681 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.011411 4681 log.go:25] "Validated CRI v1 runtime API"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.051798 4681 log.go:25] "Validated CRI v1 image API"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.053969 4681 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.063066 4681 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-04-04-01-50-29-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.063098 4681 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.081352 4681 manager.go:217] Machine: {Timestamp:2026-04-04 01:55:21.077319096 +0000 UTC m=+0.743094246 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:4891c636-dd7d-42bd-b5a2-f8934586e626 BootID:49e1a4fd-53f8-4e31-ae85-567f85d79a05 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:32:f5:ac Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:32:f5:ac Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:5c:74:b5 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:3b:fb:d4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c6:6d:1d Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ba:25:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:98:5e:8d:0b:6d Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:2b:0c:23:bc:de Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.081619 4681 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.081744 4681 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.083459 4681 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.083672 4681 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.083722 4681 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.083982 4681 topology_manager.go:138] "Creating topology manager with none policy"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.083995 4681 container_manager_linux.go:303] "Creating device plugin manager"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.084593 4681 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.084628 4681 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.084810 4681 state_mem.go:36] "Initialized new in-memory state store"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.084967 4681 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.089853 4681 kubelet.go:418] "Attempting to sync node with API server"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.089888 4681 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.089956 4681 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.089978 4681 kubelet.go:324] "Adding apiserver pod source"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.089991 4681 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.104747 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.104876 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError"
Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.104833 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.104962 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.105803 4681 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.107090 4681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.108682 4681 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111060 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111082 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111090 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111098 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111111 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111119 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111128 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111138 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111148 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111156 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111170 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.111177 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.112852 4681 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.113396 4681 server.go:1280] "Started kubelet"
Apr 04 01:55:21 crc systemd[1]: Started Kubernetes Kubelet.
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.116413 4681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.116473 4681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.117134 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.117759 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.117827 4681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.117835 4681 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.118010 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.118068 4681 volume_manager.go:287] "The desired_state_of_world populator starts"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.118081 4681 volume_manager.go:289] "Starting Kubelet Volume Manager"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.118125 4681 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.118838 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="200ms"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.119036 4681 server.go:460] "Adding debug handlers to kubelet server"
Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.119785 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.120167 4681 factory.go:55] Registering systemd factory
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.120347 4681 factory.go:221] Registration of the systemd container factory successfully
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.120292 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.121920 4681 factory.go:153] Registering CRI-O factory
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.122088 4681 factory.go:221] Registration of the crio container factory successfully
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.122319 4681 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.122470 4681 factory.go:103] Registering Raw factory
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.122652 4681 manager.go:1196] Started watching for new ooms in manager
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.124228 4681 manager.go:319] Starting recovery of all containers
Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.131191 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144742 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144868 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144893 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144914 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144933 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144952 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.144972 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145023 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145078 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145156 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145175 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145193 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145210 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145234 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145251 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145302 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145323 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145345 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145363 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145382 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145412 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145431 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145450 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145479 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145500 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145521 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145545 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145567 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145586 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145605 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145626 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145687 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145707 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145727 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145748 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145768 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145787 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145810 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145828 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145858 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145878 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145900 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145922 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145944 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145964 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.145984 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146003 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146024 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146044 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146109 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146133 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146156 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146186 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146210 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146232 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146254 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146301 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146321 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146340 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146361 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146382 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146402 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146423 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146444 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146463 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146484 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146505 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146526 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146547 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146567 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146587 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146608 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146628 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146649 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146669 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146689 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146720 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146740 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146805 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146828 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146848 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146869 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146890 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146912 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146932 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146952 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.146979 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147011 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147042 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" 
seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147067 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147089 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147110 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147132 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147151 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147170 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147195 4681 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147214 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147236 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147255 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147308 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.147327 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150461 4681 reconstruct.go:144] "Volume is marked device as uncertain and 
added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150531 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150564 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150588 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150625 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150653 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150679 4681 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150702 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150727 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150796 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150825 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150853 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150879 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150909 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150934 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.150958 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151015 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151038 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151060 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151081 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151100 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151119 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151139 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151159 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151179 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151200 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151221 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151241 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151261 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151380 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151399 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151420 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151444 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151467 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151489 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151512 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151532 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151552 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151573 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151593 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151614 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151636 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151657 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151711 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151730 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151751 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151771 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151794 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151824 4681 manager.go:324] Recovery completed Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151844 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151865 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151888 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151908 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151927 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151946 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151967 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.151986 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152009 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152028 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152048 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152081 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152100 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" 
seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152122 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152143 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152163 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152183 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152203 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152226 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152245 4681 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152324 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152345 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152365 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152384 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152404 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152422 4681 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152442 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152462 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152482 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152501 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152522 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152541 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152561 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152581 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152601 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152621 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152642 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152662 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152682 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152702 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152722 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152743 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152765 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152785 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152806 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152829 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152851 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152881 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152902 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152922 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152943 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152962 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.152984 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153005 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153025 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153045 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153065 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153092 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153121 4681 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153146 4681 reconstruct.go:97] "Volume reconstruction finished" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.153165 4681 reconciler.go:26] "Reconciler: start to sync state" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.161909 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.163867 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.163929 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.163962 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.165113 4681 cpu_manager.go:225] "Starting CPU 
manager" policy="none" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.165197 4681 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.165319 4681 state_mem.go:36] "Initialized new in-memory state store" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.186435 4681 policy_none.go:49] "None policy: Start" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.188660 4681 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.188739 4681 state_mem.go:35] "Initializing new in-memory state store" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.195307 4681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.199197 4681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.199400 4681 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.199554 4681 kubelet.go:2335] "Starting kubelet main sync loop" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.199692 4681 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.202312 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.202430 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.218669 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.257784 4681 manager.go:334] "Starting Device Plugin manager" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.257865 4681 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.257882 4681 server.go:79] "Starting device plugin registration server" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.258511 4681 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.258534 4681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.260976 4681 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.261303 4681 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.261334 4681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.273158 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.300320 4681 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.300429 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.301901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.301999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.302014 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.302326 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.302462 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.302530 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304375 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304391 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304592 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304722 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.304763 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.306072 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.306127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.306150 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.306437 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.307773 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.307900 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.310736 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.310796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.310820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.312447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.312596 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.312622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.312836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.313983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.314014 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.314083 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: 
I0404 01:55:21.314618 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.314836 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.315967 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.316199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.316446 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.316565 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.316769 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.316791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.317216 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.317460 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.318704 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.318743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.318769 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.319634 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="400ms" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356724 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356778 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356855 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356894 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356927 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356961 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.356998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357149 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357189 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357253 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357354 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.357422 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.359067 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.360529 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.360706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.360861 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.361007 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.361633 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.71:6443: connect: connection 
refused" node="crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459182 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459259 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459428 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459452 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459559 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459589 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459565 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459560 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459676 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459692 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459727 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459747 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459678 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459805 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459766 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459855 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459881 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.459962 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460034 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460108 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460141 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460159 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.460254 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.561900 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.564772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.564901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.564971 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.565085 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.565881 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.71:6443: connect: 
connection refused" node="crc" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.626937 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.643170 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.654098 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.677382 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.696462 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.701390 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-dd2052c2dfa73b4e34a5ed1a3a1d721948bd89e518938be0cbade7f19c679e41 WatchSource:0}: Error finding container dd2052c2dfa73b4e34a5ed1a3a1d721948bd89e518938be0cbade7f19c679e41: Status 404 returned error can't find the container with id dd2052c2dfa73b4e34a5ed1a3a1d721948bd89e518938be0cbade7f19c679e41 Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.704553 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-286b7a0df86bad37ce178ed6f61f27820c76b488c0dc6bfa55685a279268bb9e WatchSource:0}: Error finding container 286b7a0df86bad37ce178ed6f61f27820c76b488c0dc6bfa55685a279268bb9e: Status 404 returned error can't find the container with id 286b7a0df86bad37ce178ed6f61f27820c76b488c0dc6bfa55685a279268bb9e Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.707237 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.713000 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-81f4c896163ab4232b6a29699443d14223bbae01bb5c385b6370c8228f4fea3a WatchSource:0}: Error finding container 81f4c896163ab4232b6a29699443d14223bbae01bb5c385b6370c8228f4fea3a: Status 404 returned error can't find the container with id 81f4c896163ab4232b6a29699443d14223bbae01bb5c385b6370c8228f4fea3a Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.721252 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="800ms" Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.728620 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-19c519979f6e1b2cac663e9f643a6bcc154d8e8085bd7dc950d599305f8087aa WatchSource:0}: Error finding container 19c519979f6e1b2cac663e9f643a6bcc154d8e8085bd7dc950d599305f8087aa: Status 404 returned error can't find the container with id 19c519979f6e1b2cac663e9f643a6bcc154d8e8085bd7dc950d599305f8087aa Apr 04 01:55:21 crc kubenswrapper[4681]: W0404 01:55:21.731133 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-21d0d5bdc4470a643c977a6b5152e44960abb5ed1c00b54bbcbfd326becebbde WatchSource:0}: Error finding container 21d0d5bdc4470a643c977a6b5152e44960abb5ed1c00b54bbcbfd326becebbde: Status 404 returned error can't find the container with id 
21d0d5bdc4470a643c977a6b5152e44960abb5ed1c00b54bbcbfd326becebbde Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.966344 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.967523 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.967559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.967571 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:21 crc kubenswrapper[4681]: I0404 01:55:21.967593 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:21 crc kubenswrapper[4681]: E0404 01:55:21.967889 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.71:6443: connect: connection refused" node="crc" Apr 04 01:55:22 crc kubenswrapper[4681]: W0404 01:55:22.104127 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:22 crc kubenswrapper[4681]: E0404 01:55:22.104213 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.118451 4681 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.204841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"81f4c896163ab4232b6a29699443d14223bbae01bb5c385b6370c8228f4fea3a"} Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.208695 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"286b7a0df86bad37ce178ed6f61f27820c76b488c0dc6bfa55685a279268bb9e"} Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.210910 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"dd2052c2dfa73b4e34a5ed1a3a1d721948bd89e518938be0cbade7f19c679e41"} Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.213157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"21d0d5bdc4470a643c977a6b5152e44960abb5ed1c00b54bbcbfd326becebbde"} Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.215289 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"19c519979f6e1b2cac663e9f643a6bcc154d8e8085bd7dc950d599305f8087aa"} Apr 04 01:55:22 crc kubenswrapper[4681]: W0404 01:55:22.289824 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:22 crc kubenswrapper[4681]: E0404 01:55:22.289889 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:22 crc kubenswrapper[4681]: W0404 01:55:22.437440 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:22 crc kubenswrapper[4681]: E0404 01:55:22.438062 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:22 crc kubenswrapper[4681]: E0404 01:55:22.522945 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="1.6s" Apr 04 01:55:22 crc kubenswrapper[4681]: W0404 01:55:22.686803 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:22 crc 
kubenswrapper[4681]: E0404 01:55:22.686925 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.768492 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.770181 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.770220 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.770229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:22 crc kubenswrapper[4681]: I0404 01:55:22.770285 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:22 crc kubenswrapper[4681]: E0404 01:55:22.770771 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.71:6443: connect: connection refused" node="crc" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.067554 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Apr 04 01:55:23 crc kubenswrapper[4681]: E0404 01:55:23.069122 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.118161 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.221590 4681 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509" exitCode=0 Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.221683 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.221718 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.223172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.223222 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.223241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.225102 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.225138 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.225157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.227088 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b" exitCode=0 Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.227169 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.227237 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.228658 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.228707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.228755 4681 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.229867 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc" exitCode=0 Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.229978 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.229998 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.230842 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.230877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.230895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.232251 4681 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134" exitCode=0 Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.232313 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134"} Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.232644 4681 kubelet_node_status.go:401] "Setting node annotation to 
enable volume controller attach/detach" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.233171 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234165 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234184 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234294 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:23 crc kubenswrapper[4681]: I0404 01:55:23.234368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:23 crc kubenswrapper[4681]: W0404 01:55:23.973344 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:23 crc kubenswrapper[4681]: E0404 01:55:23.973459 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:24 crc 
kubenswrapper[4681]: I0404 01:55:24.118422 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:24 crc kubenswrapper[4681]: E0404 01:55:24.124737 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="3.2s" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.239515 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.239594 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.240737 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.240783 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.240793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242065 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 
01:55:24.242102 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242114 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242129 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242821 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.242830 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.245147 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.245230 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.246767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:24 crc kubenswrapper[4681]: 
I0404 01:55:24.246796 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.246805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.249254 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.249319 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.249329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.249338 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.252034 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250" exitCode=0 Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.252094 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250"} Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.252252 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.253303 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.253335 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.253345 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.370976 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.372178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.372216 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.372227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:24 crc kubenswrapper[4681]: I0404 01:55:24.372305 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:24 crc kubenswrapper[4681]: E0404 01:55:24.372818 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.71:6443: connect: connection refused" node="crc" Apr 04 01:55:24 crc kubenswrapper[4681]: W0404 01:55:24.464247 4681 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:24 crc kubenswrapper[4681]: E0404 01:55:24.464355 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:24 crc kubenswrapper[4681]: W0404 01:55:24.916753 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.71:6443: connect: connection refused Apr 04 01:55:24 crc kubenswrapper[4681]: E0404 01:55:24.916838 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.71:6443: connect: connection refused" logger="UnhandledError" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.259407 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"219b180a18aa29f229c3acf24a32abe91e65d5f64d16954b387bf4d3fe8278fb"} Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.259515 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.260590 4681 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.260624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.260636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263624 4681 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57" exitCode=0 Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263717 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263722 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263748 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263757 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57"} Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263749 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.263805 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.265991 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266015 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266072 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266091 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266312 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.266400 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:25 crc kubenswrapper[4681]: I0404 01:55:25.959796 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.278892 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c"} Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279001 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370"} Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566"} Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279053 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279210 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279055 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f"} Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.279595 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280258 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280324 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280340 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280461 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280519 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.280542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.376778 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.376995 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.379251 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.379344 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.379369 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:26 crc kubenswrapper[4681]: I0404 01:55:26.387743 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.099538 4681 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.292880 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3"} Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.293066 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.293085 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.293635 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294539 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294593 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.294689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.295495 4681 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.295574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.295600 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.573874 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.575170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.575223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.575246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:27 crc kubenswrapper[4681]: I0404 01:55:27.575336 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.116514 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.272743 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.294852 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.294961 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.295968 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.296026 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.296049 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.296174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.296214 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.296236 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.379869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.638478 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.638632 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.638689 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.640137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.640178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:55:28 crc kubenswrapper[4681]: I0404 01:55:28.640189 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.169974 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.298202 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.298622 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.298670 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.298710 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.302926 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.302996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303023 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.302945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303279 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303381 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.303398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:29 crc kubenswrapper[4681]: I0404 01:55:29.807713 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.171402 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.300977 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.301183 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.303104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.303134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.303147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.304029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:30 crc 
kubenswrapper[4681]: I0404 01:55:30.304200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:30 crc kubenswrapper[4681]: I0404 01:55:30.304359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:31 crc kubenswrapper[4681]: E0404 01:55:31.273411 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:55:31 crc kubenswrapper[4681]: I0404 01:55:31.639336 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:55:31 crc kubenswrapper[4681]: I0404 01:55:31.639448 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 01:55:35 crc kubenswrapper[4681]: I0404 01:55:35.118584 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Apr 04 01:55:35 crc kubenswrapper[4681]: W0404 01:55:35.300062 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Apr 04 01:55:35 crc kubenswrapper[4681]: I0404 01:55:35.300495 4681 trace.go:236] Trace[2106842999]: 
"Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (04-Apr-2026 01:55:25.298) (total time: 10001ms): Apr 04 01:55:35 crc kubenswrapper[4681]: Trace[2106842999]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (01:55:35.300) Apr 04 01:55:35 crc kubenswrapper[4681]: Trace[2106842999]: [10.001475465s] [10.001475465s] END Apr 04 01:55:35 crc kubenswrapper[4681]: E0404 01:55:35.300784 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.134577 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.137319 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" interval="6.4s" Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.138790 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" node="crc" Apr 04 01:55:36 crc kubenswrapper[4681]: W0404 01:55:36.146173 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.146332 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.148465 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.148560 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Apr 04 01:55:36 crc kubenswrapper[4681]: W0404 01:55:36.148527 4681 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.148679 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.151071 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z Apr 04 01:55:36 crc kubenswrapper[4681]: W0404 01:55:36.158474 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.158576 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.161839 4681 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.161891 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Apr 04 01:55:36 crc kubenswrapper[4681]: E0404 01:55:36.167757 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:36Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.317527 4681 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.319575 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="219b180a18aa29f229c3acf24a32abe91e65d5f64d16954b387bf4d3fe8278fb" exitCode=255 Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.319632 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"219b180a18aa29f229c3acf24a32abe91e65d5f64d16954b387bf4d3fe8278fb"} Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.319831 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.320902 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.320935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.320944 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:36 crc kubenswrapper[4681]: I0404 01:55:36.321382 4681 scope.go:117] "RemoveContainer" containerID="219b180a18aa29f229c3acf24a32abe91e65d5f64d16954b387bf4d3fe8278fb" Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.125543 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:37Z is after 2026-02-23T05:33:13Z Apr 04 01:55:37 crc 
kubenswrapper[4681]: I0404 01:55:37.323575 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.325625 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3"} Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.325789 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.326576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.326615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:37 crc kubenswrapper[4681]: I0404 01:55:37.326624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.125713 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:38Z is after 2026-02-23T05:33:13Z Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.126336 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.312845 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Apr 04 01:55:38 crc 
kubenswrapper[4681]: I0404 01:55:38.313087 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.315065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.315157 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.315183 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.332052 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.333037 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.335763 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" exitCode=255 Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.335809 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3"} Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.335849 4681 scope.go:117] "RemoveContainer" containerID="219b180a18aa29f229c3acf24a32abe91e65d5f64d16954b387bf4d3fe8278fb" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.336010 4681 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.336992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.337071 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.337100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.338120 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:38 crc kubenswrapper[4681]: E0404 01:55:38.338526 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.341557 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.341716 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.342591 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.342636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.342649 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.346950 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:38 crc kubenswrapper[4681]: I0404 01:55:38.810337 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:38 crc kubenswrapper[4681]: W0404 01:55:38.903902 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:38Z is after 2026-02-23T05:33:13Z Apr 04 01:55:38 crc kubenswrapper[4681]: E0404 01:55:38.903980 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:38Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.122345 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:39Z is after 2026-02-23T05:33:13Z Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.342769 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Apr 
04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.346108 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.347574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.347636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.347655 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.348534 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:39 crc kubenswrapper[4681]: E0404 01:55:39.348817 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.817035 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.817246 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.818950 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.819028 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:39 crc kubenswrapper[4681]: I0404 01:55:39.819056 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.122736 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:40Z is after 2026-02-23T05:33:13Z Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.348728 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.349997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.350098 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.350114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:40 crc kubenswrapper[4681]: I0404 01:55:40.350937 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:40 crc kubenswrapper[4681]: E0404 01:55:40.351217 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:55:41 crc kubenswrapper[4681]: I0404 01:55:41.122609 
4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:41Z is after 2026-02-23T05:33:13Z Apr 04 01:55:41 crc kubenswrapper[4681]: E0404 01:55:41.273596 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:55:41 crc kubenswrapper[4681]: I0404 01:55:41.639148 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:55:41 crc kubenswrapper[4681]: I0404 01:55:41.639256 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:55:42 crc kubenswrapper[4681]: I0404 01:55:42.122905 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:42Z is after 2026-02-23T05:33:13Z Apr 04 01:55:42 crc kubenswrapper[4681]: I0404 01:55:42.539794 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:42 crc kubenswrapper[4681]: 
I0404 01:55:42.541876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:42 crc kubenswrapper[4681]: I0404 01:55:42.541990 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:42 crc kubenswrapper[4681]: I0404 01:55:42.542019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:42 crc kubenswrapper[4681]: I0404 01:55:42.542065 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:42 crc kubenswrapper[4681]: E0404 01:55:42.543492 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:42Z is after 2026-02-23T05:33:13Z" interval="7s" Apr 04 01:55:42 crc kubenswrapper[4681]: E0404 01:55:42.547240 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:42Z is after 2026-02-23T05:33:13Z" node="crc" Apr 04 01:55:43 crc kubenswrapper[4681]: W0404 01:55:43.037909 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:43Z is after 2026-02-23T05:33:13Z Apr 04 01:55:43 crc kubenswrapper[4681]: E0404 01:55:43.038043 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: 
failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:43Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.122239 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:43Z is after 2026-02-23T05:33:13Z Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.424595 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.424968 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.426749 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.426818 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.426848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:43 crc kubenswrapper[4681]: I0404 01:55:43.427747 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:43 crc kubenswrapper[4681]: E0404 01:55:43.428016 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:55:44 crc kubenswrapper[4681]: I0404 01:55:44.123509 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:44Z is after 2026-02-23T05:33:13Z Apr 04 01:55:44 crc kubenswrapper[4681]: I0404 01:55:44.512568 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Apr 04 01:55:44 crc kubenswrapper[4681]: E0404 01:55:44.518760 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:44Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:45 crc kubenswrapper[4681]: I0404 01:55:45.123993 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:45Z is after 2026-02-23T05:33:13Z Apr 04 01:55:46 crc kubenswrapper[4681]: W0404 01:55:46.077208 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z Apr 04 01:55:46 crc kubenswrapper[4681]: E0404 01:55:46.077343 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:46 crc kubenswrapper[4681]: I0404 01:55:46.121723 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z Apr 04 01:55:46 crc kubenswrapper[4681]: E0404 01:55:46.172047 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:55:46 
crc kubenswrapper[4681]: W0404 01:55:46.804367 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z Apr 04 01:55:46 crc kubenswrapper[4681]: E0404 01:55:46.804500 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:47 crc kubenswrapper[4681]: I0404 01:55:47.121153 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:47Z is after 2026-02-23T05:33:13Z Apr 04 01:55:47 crc kubenswrapper[4681]: W0404 01:55:47.503751 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:47Z is after 2026-02-23T05:33:13Z Apr 04 01:55:47 crc kubenswrapper[4681]: E0404 01:55:47.503879 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:48 crc kubenswrapper[4681]: I0404 01:55:48.123184 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:48Z is after 2026-02-23T05:33:13Z Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.124155 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:49Z is after 2026-02-23T05:33:13Z Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.548599 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:49 crc kubenswrapper[4681]: E0404 01:55:49.549030 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:49Z is after 2026-02-23T05:33:13Z" interval="7s" Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.550644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.550896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.551060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:49 crc kubenswrapper[4681]: I0404 01:55:49.551245 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:49 crc kubenswrapper[4681]: E0404 01:55:49.554645 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:49Z is after 2026-02-23T05:33:13Z" node="crc" Apr 04 01:55:50 crc kubenswrapper[4681]: I0404 01:55:50.122806 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:50Z is after 2026-02-23T05:33:13Z Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.123250 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 2026-02-23T05:33:13Z Apr 04 01:55:51 crc kubenswrapper[4681]: E0404 01:55:51.274111 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.639777 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.640172 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.640507 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.640913 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.642615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.642823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.643017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.643878 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Apr 04 01:55:51 crc kubenswrapper[4681]: I0404 01:55:51.644324 4681 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c" gracePeriod=30 Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.121627 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:52Z is after 2026-02-23T05:33:13Z Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.390564 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.391090 4681 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c" exitCode=255 Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.391147 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c"} Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.391187 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7"} Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.391363 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.392405 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.392452 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:52 crc kubenswrapper[4681]: I0404 01:55:52.392470 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:53 crc kubenswrapper[4681]: I0404 01:55:53.123020 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:53Z is after 2026-02-23T05:33:13Z Apr 04 01:55:54 crc kubenswrapper[4681]: I0404 01:55:54.122956 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:54Z is after 2026-02-23T05:33:13Z Apr 04 01:55:55 crc kubenswrapper[4681]: I0404 01:55:55.122548 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:55Z is after 2026-02-23T05:33:13Z Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.123783 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:56Z is after 2026-02-23T05:33:13Z Apr 04 01:55:56 crc kubenswrapper[4681]: E0404 01:55:56.178858 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:56Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.555713 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:56 crc kubenswrapper[4681]: E0404 01:55:56.556994 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:56Z is after 2026-02-23T05:33:13Z" interval="7s" Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.557600 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.557653 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.557666 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:56 crc kubenswrapper[4681]: I0404 01:55:56.557696 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:55:56 crc kubenswrapper[4681]: E0404 01:55:56.561023 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:56Z is after 2026-02-23T05:33:13Z" node="crc" Apr 04 01:55:57 crc kubenswrapper[4681]: I0404 01:55:57.121519 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:57Z is after 2026-02-23T05:33:13Z Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.123150 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:58Z is after 2026-02-23T05:33:13Z Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.200876 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.202419 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.202510 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.202536 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.203562 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:58 crc kubenswrapper[4681]: W0404 01:55:58.309948 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:58Z is after 2026-02-23T05:33:13Z Apr 04 01:55:58 crc kubenswrapper[4681]: E0404 01:55:58.310080 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:58Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.638651 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.638789 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.640246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.640313 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:55:58 crc kubenswrapper[4681]: I0404 01:55:58.640330 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.123192 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:59Z is after 2026-02-23T05:33:13Z Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.170255 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.414246 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.415034 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.417745 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" exitCode=255 Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.417789 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119"} Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.417847 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.417847 4681 scope.go:117] "RemoveContainer" containerID="cd1b5bccbb728df0a2470cd4cf2e5c0a0695dd1b444f6a16fd9a150422fd15a3" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.417968 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418916 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418932 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418965 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418980 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.418992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:55:59 crc kubenswrapper[4681]: I0404 01:55:59.419689 4681 scope.go:117] "RemoveContainer" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" Apr 04 01:55:59 crc kubenswrapper[4681]: E0404 01:55:59.419864 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:00 crc kubenswrapper[4681]: I0404 01:56:00.122016 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:00Z is after 2026-02-23T05:33:13Z Apr 04 01:56:00 crc kubenswrapper[4681]: I0404 01:56:00.424409 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Apr 04 01:56:01 crc kubenswrapper[4681]: I0404 01:56:01.123225 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:01Z is after 2026-02-23T05:33:13Z Apr 04 01:56:01 crc kubenswrapper[4681]: E0404 01:56:01.274592 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:01 crc kubenswrapper[4681]: I0404 01:56:01.638768 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:56:01 crc kubenswrapper[4681]: I0404 01:56:01.638891 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:56:01 crc kubenswrapper[4681]: W0404 01:56:01.787452 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:01Z is after 2026-02-23T05:33:13Z Apr 04 01:56:01 crc kubenswrapper[4681]: E0404 01:56:01.787610 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:56:01 crc kubenswrapper[4681]: I0404 01:56:01.900334 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Apr 04 01:56:01 crc kubenswrapper[4681]: E0404 01:56:01.906848 4681 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:01Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:56:01 crc kubenswrapper[4681]: E0404 01:56:01.908081 4681 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable 
to rotate certs: timed out waiting for the condition" logger="UnhandledError" Apr 04 01:56:02 crc kubenswrapper[4681]: I0404 01:56:02.121384 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:02Z is after 2026-02-23T05:33:13Z Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.123617 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:03Z is after 2026-02-23T05:33:13Z Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.424844 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.425317 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.427511 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.427588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.427618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.428666 4681 scope.go:117] "RemoveContainer" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" Apr 04 01:56:03 crc kubenswrapper[4681]: E0404 01:56:03.429039 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.562287 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.563576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.563605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.563614 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:03 crc kubenswrapper[4681]: I0404 01:56:03.563634 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:56:03 crc kubenswrapper[4681]: E0404 01:56:03.564961 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:03Z is after 2026-02-23T05:33:13Z" interval="7s" Apr 04 01:56:03 crc kubenswrapper[4681]: E0404 01:56:03.567864 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:03Z is after 2026-02-23T05:33:13Z" node="crc" Apr 
04 01:56:04 crc kubenswrapper[4681]: I0404 01:56:04.121885 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:04Z is after 2026-02-23T05:33:13Z Apr 04 01:56:04 crc kubenswrapper[4681]: W0404 01:56:04.937816 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:04Z is after 2026-02-23T05:33:13Z Apr 04 01:56:04 crc kubenswrapper[4681]: E0404 01:56:04.937925 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Apr 04 01:56:05 crc kubenswrapper[4681]: I0404 01:56:05.120536 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:56:05Z is after 2026-02-23T05:33:13Z Apr 04 01:56:06 crc kubenswrapper[4681]: I0404 01:56:06.123350 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the 
cluster scope Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.186164 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962a2dce14 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,LastTimestamp:2026-04-04 01:55:21.113357844 +0000 UTC m=+0.779132964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.197955 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.205862 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.211566 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.215969 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a3049633735e55 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.268911701 +0000 UTC m=+0.934686831,LastTimestamp:2026-04-04 01:55:21.268911701 +0000 UTC m=+0.934686831,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.220171 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.301974153 +0000 UTC m=+0.967749273,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.224170 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.302009846 +0000 UTC m=+0.967784966,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.229617 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.302021823 +0000 UTC m=+0.967796943,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.233691 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC 
m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.304193477 +0000 UTC m=+0.969968597,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.237593 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.304219692 +0000 UTC m=+0.969994812,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.242020 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.304238988 +0000 UTC m=+0.970014108,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.246615 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.304335969 +0000 UTC m=+0.970111099,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.251256 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.30438416 +0000 UTC m=+0.970159290,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.255457 4681 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.304399967 +0000 UTC m=+0.970175097,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.260653 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.306105292 +0000 UTC m=+0.971880462,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.265435 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" 
in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.306141785 +0000 UTC m=+0.971916945,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.269252 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.306162971 +0000 UTC m=+0.971938141,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.275021 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.310779635 +0000 UTC m=+0.976554795,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.280038 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.31080791 +0000 UTC m=+0.976583070,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.286879 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.310830695 +0000 UTC m=+0.976605855,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.291351 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.312478162 +0000 UTC m=+0.978253292,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.296782 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC 
m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.312613065 +0000 UTC m=+0.978388205,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.303811 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d322504\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d322504 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163973892 +0000 UTC m=+0.829749052,LastTimestamp:2026-04-04 01:55:21.312631751 +0000 UTC m=+0.978406881,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.310634 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31106e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31106e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163903086 +0000 UTC m=+0.829678246,LastTimestamp:2026-04-04 01:55:21.313962031 +0000 UTC m=+0.979737171,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.315497 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18a304962d31a24c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18a304962d31a24c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.163940428 +0000 UTC m=+0.829715588,LastTimestamp:2026-04-04 01:55:21.314007412 +0000 UTC m=+0.979782542,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.320913 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304964e0598a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.714702505 +0000 UTC m=+1.380477655,LastTimestamp:2026-04-04 01:55:21.714702505 +0000 UTC m=+1.380477655,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.324582 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a304964e0daa59 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.715231321 +0000 UTC m=+1.381006481,LastTimestamp:2026-04-04 01:55:21.715231321 +0000 UTC m=+1.381006481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.330330 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a304964e4f56cb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.719535307 +0000 UTC m=+1.385310457,LastTimestamp:2026-04-04 01:55:21.719535307 +0000 UTC m=+1.385310457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.331768 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a304964f383134 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.734795572 +0000 UTC m=+1.400570692,LastTimestamp:2026-04-04 01:55:21.734795572 +0000 UTC m=+1.400570692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.335969 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a304964f478943 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:21.735801155 +0000 UTC m=+1.401576315,LastTimestamp:2026-04-04 01:55:21.735801155 +0000 UTC m=+1.401576315,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.341357 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a3049674b7f727 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.363926311 +0000 UTC m=+2.029701471,LastTimestamp:2026-04-04 01:55:22.363926311 +0000 UTC m=+2.029701471,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 
01:56:06.345562 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049674b94dbe openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.364014014 +0000 UTC m=+2.029789174,LastTimestamp:2026-04-04 01:55:22.364014014 +0000 UTC m=+2.029789174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.349868 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a3049674f198e5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.367703269 +0000 UTC m=+2.033478419,LastTimestamp:2026-04-04 01:55:22.367703269 +0000 UTC m=+2.033478419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc 
kubenswrapper[4681]: E0404 01:56:06.354300 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a3049674feb71a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.36856297 +0000 UTC m=+2.034338120,LastTimestamp:2026-04-04 01:55:22.36856297 +0000 UTC m=+2.034338120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.358798 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496750c976b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.369472363 +0000 UTC m=+2.035247513,LastTimestamp:2026-04-04 01:55:22.369472363 +0000 UTC m=+2.035247513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.363714 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049675b7632e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.380665646 +0000 UTC m=+2.046440786,LastTimestamp:2026-04-04 01:55:22.380665646 +0000 UTC m=+2.046440786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.367744 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a3049675d16233 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.382369331 +0000 UTC 
m=+2.048144461,LastTimestamp:2026-04-04 01:55:22.382369331 +0000 UTC m=+2.048144461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.372023 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049675d8ca00 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.382854656 +0000 UTC m=+2.048629786,LastTimestamp:2026-04-04 01:55:22.382854656 +0000 UTC m=+2.048629786,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.376575 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a3049675e556e3 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.383677155 +0000 UTC m=+2.049452325,LastTimestamp:2026-04-04 01:55:22.383677155 +0000 UTC m=+2.049452325,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.381007 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304967637e1f0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.389086704 +0000 UTC m=+2.054861834,LastTimestamp:2026-04-04 01:55:22.389086704 +0000 UTC m=+2.054861834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.386175 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496764ca02e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.390446126 +0000 UTC m=+2.056221276,LastTimestamp:2026-04-04 01:55:22.390446126 +0000 UTC m=+2.056221276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.392605 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049688b09829 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.698987561 +0000 UTC m=+2.364762711,LastTimestamp:2026-04-04 01:55:22.698987561 +0000 UTC m=+2.364762711,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.399588 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049689906ab8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.713655992 +0000 UTC m=+2.379431122,LastTimestamp:2026-04-04 01:55:22.713655992 +0000 UTC m=+2.379431122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.404589 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049689a3aa57 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.714917463 +0000 UTC m=+2.380692623,LastTimestamp:2026-04-04 01:55:22.714917463 +0000 UTC m=+2.380692623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.411928 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a304969610dce7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.923400423 +0000 UTC m=+2.589175573,LastTimestamp:2026-04-04 01:55:22.923400423 +0000 UTC m=+2.589175573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.418333 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a304969c285c1b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.025603611 +0000 UTC m=+2.691378771,LastTimestamp:2026-04-04 01:55:23.025603611 +0000 UTC m=+2.691378771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.425241 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a304969c4e1308 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.028075272 +0000 UTC m=+2.693850432,LastTimestamp:2026-04-04 01:55:23.028075272 +0000 UTC m=+2.693850432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.432873 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496a80aafbd openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.224985533 +0000 UTC m=+2.890760693,LastTimestamp:2026-04-04 01:55:23.224985533 +0000 UTC m=+2.890760693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.440480 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496a8814d34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.232759092 +0000 UTC m=+2.898534222,LastTimestamp:2026-04-04 01:55:23.232759092 +0000 UTC m=+2.898534222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.449596 4681 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30496a8abc6ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.235542699 +0000 UTC m=+2.901317859,LastTimestamp:2026-04-04 01:55:23.235542699 +0000 UTC m=+2.901317859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.456151 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a30496a9087644 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.241616964 +0000 
UTC m=+2.907392104,LastTimestamp:2026-04-04 01:55:23.241616964 +0000 UTC m=+2.907392104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.462547 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a30496ab368758 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.278190424 +0000 UTC m=+2.943965544,LastTimestamp:2026-04-04 01:55:23.278190424 +0000 UTC m=+2.943965544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.468815 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a30496ac3f9aad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.295562413 +0000 UTC m=+2.961337543,LastTimestamp:2026-04-04 01:55:23.295562413 +0000 UTC m=+2.961337543,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.474958 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496b6085d15 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.459714325 +0000 UTC m=+3.125489475,LastTimestamp:2026-04-04 01:55:23.459714325 +0000 UTC m=+3.125489475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.479921 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18a30496b6169a7e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.46064755 +0000 UTC m=+3.126422690,LastTimestamp:2026-04-04 01:55:23.46064755 +0000 UTC m=+3.126422690,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.485754 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a30496b619e860 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.460864096 +0000 UTC m=+3.126639206,LastTimestamp:2026-04-04 01:55:23.460864096 +0000 UTC m=+3.126639206,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.492658 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496b623e77a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.461519226 +0000 UTC m=+3.127294356,LastTimestamp:2026-04-04 01:55:23.461519226 +0000 UTC m=+3.127294356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.499526 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496b72a1db3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.478703539 +0000 UTC m=+3.144478669,LastTimestamp:2026-04-04 01:55:23.478703539 +0000 UTC m=+3.144478669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.506602 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496b73b88b1 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.479845041 +0000 UTC m=+3.145620161,LastTimestamp:2026-04-04 01:55:23.479845041 +0000 UTC m=+3.145620161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.512832 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18a30496b758456a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.481728362 +0000 UTC m=+3.147503482,LastTimestamp:2026-04-04 01:55:23.481728362 +0000 UTC m=+3.147503482,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.518757 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496b785a265 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.484701285 +0000 UTC m=+3.150476405,LastTimestamp:2026-04-04 01:55:23.484701285 +0000 UTC m=+3.150476405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.525038 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30496b7a8071a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.48695529 +0000 UTC m=+3.152730400,LastTimestamp:2026-04-04 01:55:23.48695529 +0000 UTC m=+3.152730400,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.531241 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496b7bb66ca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.48822497 +0000 UTC m=+3.154000090,LastTimestamp:2026-04-04 01:55:23.48822497 +0000 UTC m=+3.154000090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.536531 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496c53ab829 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container 
kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.714672681 +0000 UTC m=+3.380447841,LastTimestamp:2026-04-04 01:55:23.714672681 +0000 UTC m=+3.380447841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.538432 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496c5662a69 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.717519977 +0000 UTC m=+3.383295137,LastTimestamp:2026-04-04 01:55:23.717519977 +0000 UTC m=+3.383295137,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.543425 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496c6a78874 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.738581108 +0000 UTC m=+3.404356258,LastTimestamp:2026-04-04 01:55:23.738581108 +0000 UTC m=+3.404356258,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.549151 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496c6c5fd5d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.740577117 +0000 UTC m=+3.406352277,LastTimestamp:2026-04-04 01:55:23.740577117 +0000 UTC m=+3.406352277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.554378 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496c7251aca openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.74681057 +0000 UTC m=+3.412585700,LastTimestamp:2026-04-04 01:55:23.74681057 +0000 UTC m=+3.412585700,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.559396 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496c783a039 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.753005113 +0000 UTC m=+3.418780243,LastTimestamp:2026-04-04 01:55:23.753005113 +0000 UTC m=+3.418780243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.564675 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496d3e278bf openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.960547519 +0000 UTC m=+3.626322639,LastTimestamp:2026-04-04 01:55:23.960547519 +0000 UTC m=+3.626322639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.570461 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496d4774aa0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.970300576 
+0000 UTC m=+3.636075696,LastTimestamp:2026-04-04 01:55:23.970300576 +0000 UTC m=+3.636075696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.576303 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18a30496d4c6bf54 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.975507796 +0000 UTC m=+3.641282916,LastTimestamp:2026-04-04 01:55:23.975507796 +0000 UTC m=+3.641282916,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.580926 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496d56936de openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started 
container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.98615523 +0000 UTC m=+3.651930350,LastTimestamp:2026-04-04 01:55:23.98615523 +0000 UTC m=+3.651930350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.589654 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496d5784c22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:23.987143714 +0000 UTC m=+3.652918834,LastTimestamp:2026-04-04 01:55:23.987143714 +0000 UTC m=+3.652918834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.596219 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496df5a73cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.152959949 +0000 UTC m=+3.818735069,LastTimestamp:2026-04-04 01:55:24.152959949 +0000 UTC m=+3.818735069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.602641 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496e059f96e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.169705838 +0000 UTC m=+3.835480958,LastTimestamp:2026-04-04 01:55:24.169705838 +0000 UTC m=+3.835480958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.607325 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496e06b6e59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.170849881 +0000 UTC m=+3.836625001,LastTimestamp:2026-04-04 01:55:24.170849881 +0000 UTC m=+3.836625001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.614011 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30496e5669ac0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.254419648 +0000 UTC m=+3.920194768,LastTimestamp:2026-04-04 01:55:24.254419648 +0000 UTC m=+3.920194768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.619783 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496eb88a200 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.357313024 +0000 UTC m=+4.023088144,LastTimestamp:2026-04-04 01:55:24.357313024 +0000 UTC m=+4.023088144,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.625136 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496ec45aa92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.369701522 +0000 UTC m=+4.035476652,LastTimestamp:2026-04-04 
01:55:24.369701522 +0000 UTC m=+4.035476652,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.628893 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30496ef631327 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.421960487 +0000 UTC m=+4.087735617,LastTimestamp:2026-04-04 01:55:24.421960487 +0000 UTC m=+4.087735617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.634732 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30496f06bb3b8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.439303096 +0000 UTC m=+4.105078216,LastTimestamp:2026-04-04 01:55:24.439303096 +0000 UTC 
m=+4.105078216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.642447 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a3049721d560ab openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.268312235 +0000 UTC m=+4.934087365,LastTimestamp:2026-04-04 01:55:25.268312235 +0000 UTC m=+4.934087365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.658870 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304972f6aab76 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.496200054 +0000 UTC 
m=+5.161975174,LastTimestamp:2026-04-04 01:55:25.496200054 +0000 UTC m=+5.161975174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.665924 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304972ff6871f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.505365791 +0000 UTC m=+5.171140911,LastTimestamp:2026-04-04 01:55:25.505365791 +0000 UTC m=+5.171140911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.670997 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a30497300b2779 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.506717561 +0000 UTC m=+5.172492721,LastTimestamp:2026-04-04 01:55:25.506717561 +0000 UTC m=+5.172492721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.678213 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304973e387b54 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.744569172 +0000 UTC m=+5.410344302,LastTimestamp:2026-04-04 01:55:25.744569172 +0000 UTC m=+5.410344302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.684154 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304973f129bf3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.758864371 +0000 UTC 
m=+5.424639501,LastTimestamp:2026-04-04 01:55:25.758864371 +0000 UTC m=+5.424639501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.690118 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304973f20e370 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.759800176 +0000 UTC m=+5.425575306,LastTimestamp:2026-04-04 01:55:25.759800176 +0000 UTC m=+5.425575306,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.693334 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304974c11c225 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.976912421 +0000 UTC m=+5.642687551,LastTimestamp:2026-04-04 01:55:25.976912421 +0000 UTC m=+5.642687551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.696587 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304974d01c78e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.992642446 +0000 UTC m=+5.658417606,LastTimestamp:2026-04-04 01:55:25.992642446 +0000 UTC m=+5.658417606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.699836 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304974d17d69e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:25.994088094 +0000 UTC m=+5.659863254,LastTimestamp:2026-04-04 01:55:25.994088094 +0000 UTC m=+5.659863254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.703932 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304975c652be0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:26.250814432 +0000 UTC m=+5.916589592,LastTimestamp:2026-04-04 01:55:26.250814432 +0000 UTC m=+5.916589592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.708164 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304975d4c373d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:26.265956157 +0000 UTC m=+5.931731317,LastTimestamp:2026-04-04 01:55:26.265956157 +0000 UTC m=+5.931731317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.711692 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304975d746fc2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:26.268592066 +0000 UTC m=+5.934367236,LastTimestamp:2026-04-04 01:55:26.268592066 +0000 UTC m=+5.934367236,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.716800 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304976b4f2728 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:26.501029672 +0000 UTC m=+6.166804812,LastTimestamp:2026-04-04 01:55:26.501029672 +0000 UTC m=+6.166804812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.720144 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18a304976c38be1c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:26.516338204 +0000 UTC m=+6.182113344,LastTimestamp:2026-04-04 01:55:26.516338204 +0000 UTC m=+6.182113344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.725907 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-controller-manager-crc.18a304989d94b8d7 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Apr 04 01:56:06 crc kubenswrapper[4681]: body: Apr 04 01:56:06 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:31.639417047 +0000 UTC m=+11.305192247,LastTimestamp:2026-04-04 01:55:31.639417047 +0000 UTC m=+11.305192247,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.730170 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a304989d962a5d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:31.639511645 +0000 UTC m=+11.305286805,LastTimestamp:2026-04-04 01:55:31.639511645 +0000 UTC 
m=+11.305286805,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.734317 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-apiserver-crc.18a30499aa58510b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Apr 04 01:56:06 crc kubenswrapper[4681]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Apr 04 01:56:06 crc kubenswrapper[4681]: Apr 04 01:56:06 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:36.148529419 +0000 UTC m=+15.814304579,LastTimestamp:2026-04-04 01:55:36.148529419 +0000 UTC m=+15.814304579,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.740463 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30499aa5966dd openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:36.148600541 +0000 UTC m=+15.814375701,LastTimestamp:2026-04-04 01:55:36.148600541 +0000 UTC m=+15.814375701,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.744031 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18a30499aa58510b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-apiserver-crc.18a30499aa58510b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Apr 04 01:56:06 crc kubenswrapper[4681]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Apr 04 01:56:06 crc kubenswrapper[4681]: Apr 04 01:56:06 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:36.148529419 +0000 UTC m=+15.814304579,LastTimestamp:2026-04-04 01:55:36.161877734 +0000 UTC 
m=+15.827652894,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.748428 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18a30499aa5966dd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30499aa5966dd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:36.148600541 +0000 UTC m=+15.814375701,LastTimestamp:2026-04-04 01:55:36.161913925 +0000 UTC m=+15.827689085,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.750456 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18a30496e06b6e59\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496e06b6e59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.170849881 +0000 UTC m=+3.836625001,LastTimestamp:2026-04-04 01:55:36.322442704 +0000 UTC m=+15.988217864,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.754082 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18a30496eb88a200\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496eb88a200 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.357313024 +0000 UTC m=+4.023088144,LastTimestamp:2026-04-04 01:55:36.543060255 +0000 UTC m=+16.208835375,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.760815 4681 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.18a30496ec45aa92\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18a30496ec45aa92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:24.369701522 +0000 UTC m=+4.035476652,LastTimestamp:2026-04-04 01:55:36.561622648 +0000 UTC m=+16.227397768,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.768357 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19d8331 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Apr 04 01:56:06 crc kubenswrapper[4681]: body: Apr 04 01:56:06 crc kubenswrapper[4681]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639213873 +0000 UTC m=+21.304989023,LastTimestamp:2026-04-04 01:55:41.639213873 +0000 UTC m=+21.304989023,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.772631 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19f3aa8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639326376 +0000 UTC m=+21.305101536,LastTimestamp:2026-04-04 01:55:41.639326376 +0000 UTC m=+21.305101536,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.779053 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049af19d8331\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: 
&Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19d8331 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Apr 04 01:56:06 crc kubenswrapper[4681]: body: Apr 04 01:56:06 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639213873 +0000 UTC m=+21.304989023,LastTimestamp:2026-04-04 01:55:51.640132111 +0000 UTC m=+31.305907271,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.783022 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049af19f3aa8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19f3aa8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639326376 +0000 UTC m=+21.305101536,LastTimestamp:2026-04-04 01:55:51.64045281 +0000 UTC m=+31.306227980,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.787106 4681 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049d45f625fa openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:51.64424345 +0000 UTC m=+31.310018610,LastTimestamp:2026-04-04 01:55:51.64424345 +0000 UTC m=+31.310018610,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.790474 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049675d8ca00\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049675d8ca00 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.382854656 +0000 UTC m=+2.048629786,LastTimestamp:2026-04-04 01:55:51.7671115 +0000 UTC m=+31.432886650,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.794461 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049688b09829\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049688b09829 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.698987561 +0000 UTC m=+2.364762711,LastTimestamp:2026-04-04 01:55:51.947789702 +0000 UTC m=+31.613564822,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.797799 4681 event.go:359] "Server rejected event 
(will not retry!)" err="events \"kube-controller-manager-crc.18a3049689906ab8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049689906ab8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:22.713655992 +0000 UTC m=+2.379431122,LastTimestamp:2026-04-04 01:55:51.957341676 +0000 UTC m=+31.623116796,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.802765 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049af19d8331\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Apr 04 01:56:06 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19d8331 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Apr 
04 01:56:06 crc kubenswrapper[4681]: body: Apr 04 01:56:06 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639213873 +0000 UTC m=+21.304989023,LastTimestamp:2026-04-04 01:56:01.63885282 +0000 UTC m=+41.304627970,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:06 crc kubenswrapper[4681]: > Apr 04 01:56:06 crc kubenswrapper[4681]: E0404 01:56:06.808420 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049af19f3aa8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19f3aa8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639326376 +0000 UTC m=+21.305101536,LastTimestamp:2026-04-04 01:56:01.638926812 +0000 UTC m=+41.304701972,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 01:56:07 crc kubenswrapper[4681]: I0404 01:56:07.124852 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.124899 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:08 crc kubenswrapper[4681]: W0404 01:56:08.695406 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:08 crc kubenswrapper[4681]: E0404 01:56:08.695486 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.810164 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.810664 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.812146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.812233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.812258 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:08 crc kubenswrapper[4681]: I0404 01:56:08.813342 4681 scope.go:117] 
"RemoveContainer" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" Apr 04 01:56:08 crc kubenswrapper[4681]: E0404 01:56:08.813725 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:09 crc kubenswrapper[4681]: I0404 01:56:09.124379 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.125445 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.568346 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.570140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.570205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.570226 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:10 crc kubenswrapper[4681]: I0404 01:56:10.570293 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 
01:56:10 crc kubenswrapper[4681]: E0404 01:56:10.574894 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 04 01:56:10 crc kubenswrapper[4681]: E0404 01:56:10.575013 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Apr 04 01:56:11 crc kubenswrapper[4681]: I0404 01:56:11.125204 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:11 crc kubenswrapper[4681]: E0404 01:56:11.274901 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:11 crc kubenswrapper[4681]: I0404 01:56:11.639648 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:56:11 crc kubenswrapper[4681]: I0404 01:56:11.639775 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:56:11 crc 
kubenswrapper[4681]: E0404 01:56:11.648450 4681 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18a3049af19d8331\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Apr 04 01:56:11 crc kubenswrapper[4681]: &Event{ObjectMeta:{kube-controller-manager-crc.18a3049af19d8331 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Apr 04 01:56:11 crc kubenswrapper[4681]: body: Apr 04 01:56:11 crc kubenswrapper[4681]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 01:55:41.639213873 +0000 UTC m=+21.304989023,LastTimestamp:2026-04-04 01:56:11.639733158 +0000 UTC m=+51.305508308,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Apr 04 01:56:11 crc kubenswrapper[4681]: > Apr 04 01:56:12 crc kubenswrapper[4681]: I0404 01:56:12.124846 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:13 crc kubenswrapper[4681]: I0404 01:56:13.124723 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" 
at the cluster scope Apr 04 01:56:14 crc kubenswrapper[4681]: I0404 01:56:14.126075 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.125904 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.970506 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.970657 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.971844 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.971901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:15 crc kubenswrapper[4681]: I0404 01:56:15.971917 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:16 crc kubenswrapper[4681]: I0404 01:56:16.123661 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.121645 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.575356 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.576675 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.576739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.576756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:17 crc kubenswrapper[4681]: I0404 01:56:17.576785 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:56:17 crc kubenswrapper[4681]: E0404 01:56:17.579048 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Apr 04 01:56:17 crc kubenswrapper[4681]: E0404 01:56:17.579779 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.124124 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.645305 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.645533 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.647062 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.647126 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.647146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:18 crc kubenswrapper[4681]: I0404 01:56:18.651984 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 01:56:19 crc kubenswrapper[4681]: I0404 01:56:19.123206 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:19 crc kubenswrapper[4681]: I0404 01:56:19.481495 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:19 crc kubenswrapper[4681]: I0404 01:56:19.482765 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:19 crc kubenswrapper[4681]: I0404 01:56:19.482837 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:19 crc kubenswrapper[4681]: I0404 01:56:19.482863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:20 crc kubenswrapper[4681]: I0404 
01:56:20.123876 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:21 crc kubenswrapper[4681]: I0404 01:56:21.125431 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:21 crc kubenswrapper[4681]: E0404 01:56:21.274997 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.123251 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.201056 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.203973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.204018 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.204032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:22 crc kubenswrapper[4681]: I0404 01:56:22.204760 4681 scope.go:117] "RemoveContainer" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.126000 4681 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.494654 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.497806 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da"} Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.498374 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.499697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.499767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:23 crc kubenswrapper[4681]: I0404 01:56:23.499791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.123137 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.502158 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.503021 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.505489 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" exitCode=255 Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.505528 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da"} Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.505562 4681 scope.go:117] "RemoveContainer" containerID="b5342533989ccf2aa5c75068fa098a8b696cfea93d6a2d98450cf4af942b5119" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.505698 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.506889 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.506914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.506923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.507384 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:56:24 
crc kubenswrapper[4681]: E0404 01:56:24.507530 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.580194 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:24 crc kubenswrapper[4681]: E0404 01:56:24.581821 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.583320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.583381 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.583393 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:24 crc kubenswrapper[4681]: I0404 01:56:24.583424 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:56:24 crc kubenswrapper[4681]: E0404 01:56:24.606312 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Apr 04 01:56:25 crc kubenswrapper[4681]: I0404 01:56:25.121844 4681 csi_plugin.go:884] Failed to contact API server when waiting 
for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:25 crc kubenswrapper[4681]: I0404 01:56:25.509712 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Apr 04 01:56:26 crc kubenswrapper[4681]: I0404 01:56:26.124192 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:27 crc kubenswrapper[4681]: I0404 01:56:27.125417 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.124138 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.809455 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.809672 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.811345 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.811433 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.811454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:28 crc kubenswrapper[4681]: I0404 01:56:28.812204 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:56:28 crc kubenswrapper[4681]: E0404 01:56:28.812553 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:29 crc kubenswrapper[4681]: I0404 01:56:29.126629 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:30 crc kubenswrapper[4681]: I0404 01:56:30.123363 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.119203 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:31 crc kubenswrapper[4681]: E0404 01:56:31.275767 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:31 crc kubenswrapper[4681]: W0404 
01:56:31.323210 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Apr 04 01:56:31 crc kubenswrapper[4681]: E0404 01:56:31.323289 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Apr 04 01:56:31 crc kubenswrapper[4681]: E0404 01:56:31.588220 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.606530 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.608096 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.608154 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.608172 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:31 crc kubenswrapper[4681]: I0404 01:56:31.608208 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 04 01:56:31 crc kubenswrapper[4681]: E0404 01:56:31.612600 4681 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Apr 04 01:56:32 crc kubenswrapper[4681]: I0404 01:56:32.122772 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.124213 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.425530 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.425778 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.427311 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.427359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.427379 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.428240 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:56:33 crc kubenswrapper[4681]: E0404 01:56:33.428651 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting 
failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.910106 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Apr 04 01:56:33 crc kubenswrapper[4681]: I0404 01:56:33.924965 4681 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Apr 04 01:56:34 crc kubenswrapper[4681]: I0404 01:56:34.125386 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:35 crc kubenswrapper[4681]: I0404 01:56:35.125849 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:35 crc kubenswrapper[4681]: W0404 01:56:35.418511 4681 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 04 01:56:35 crc kubenswrapper[4681]: E0404 01:56:35.418594 4681 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Apr 04 01:56:36 crc kubenswrapper[4681]: I0404 01:56:36.125004 4681 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" 
is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Apr 04 01:56:36 crc kubenswrapper[4681]: I0404 01:56:36.760042 4681 csr.go:261] certificate signing request csr-wjcl2 is approved, waiting to be issued Apr 04 01:56:36 crc kubenswrapper[4681]: I0404 01:56:36.771687 4681 csr.go:257] certificate signing request csr-wjcl2 is issued Apr 04 01:56:36 crc kubenswrapper[4681]: I0404 01:56:36.823067 4681 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Apr 04 01:56:36 crc kubenswrapper[4681]: I0404 01:56:36.959228 4681 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 04 01:56:37 crc kubenswrapper[4681]: I0404 01:56:37.774134 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-27 17:04:10.048541718 +0000 UTC Apr 04 01:56:37 crc kubenswrapper[4681]: I0404 01:56:37.774195 4681 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6423h7m32.274352072s for next certificate rotation Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.613467 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.615077 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.615123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.615140 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.615369 4681 kubelet_node_status.go:76] "Attempting to register node" node="crc" Apr 
04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.627298 4681 kubelet_node_status.go:115] "Node was previously registered" node="crc" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.627527 4681 kubelet_node_status.go:79] "Successfully registered node" node="crc" Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.627549 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.632252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.632531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.632693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.632848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.632996 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:38Z","lastTransitionTime":"2026-04-04T01:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.653195 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.666976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.667059 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.667083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.667118 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.667137 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:38Z","lastTransitionTime":"2026-04-04T01:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.685733 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.696570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.696623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.696640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.696665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.696682 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:38Z","lastTransitionTime":"2026-04-04T01:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.715575 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... identical image list elided; see the first patch attempt above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.727133 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.727594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.727799 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.727973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:38 crc kubenswrapper[4681]: I0404 01:56:38.728124 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:38Z","lastTransitionTime":"2026-04-04T01:56:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.741209 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... identical image list elided; see the first patch attempt above ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.741381 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.741410 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.842118 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:38 crc kubenswrapper[4681]: E0404 01:56:38.942928 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.044539 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.145794 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.246030 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.347047 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.448143 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.548258 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.649191 4681 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.749520 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.850629 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:39 crc kubenswrapper[4681]: E0404 01:56:39.951458 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.052178 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.152363 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: I0404 01:56:40.200099 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:40 crc kubenswrapper[4681]: I0404 01:56:40.201830 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:40 crc kubenswrapper[4681]: I0404 01:56:40.201890 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:40 crc kubenswrapper[4681]: I0404 01:56:40.201909 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.253748 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.354684 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc 
kubenswrapper[4681]: E0404 01:56:40.455705 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.556724 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.658029 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.759197 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.859877 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:40 crc kubenswrapper[4681]: E0404 01:56:40.960810 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.061723 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.162601 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.263568 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.276621 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.364993 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.466405 4681 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.566872 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.668130 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.769352 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.870411 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:41 crc kubenswrapper[4681]: E0404 01:56:41.970562 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.071393 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.172248 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.273877 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.374672 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.475815 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.576458 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.677502 4681 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.778246 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.879167 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:42 crc kubenswrapper[4681]: E0404 01:56:42.979813 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.080838 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.181465 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.282299 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.382756 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.483512 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.584072 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.685016 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.786217 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc 
kubenswrapper[4681]: E0404 01:56:43.886707 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:43 crc kubenswrapper[4681]: E0404 01:56:43.987364 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.087974 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.188948 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.289857 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.390649 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.491345 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.592066 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.692788 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.792982 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.893576 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:44 crc kubenswrapper[4681]: E0404 01:56:44.993854 4681 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.094475 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: I0404 01:56:45.145767 4681 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.195237 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.296204 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.396714 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.497364 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.598177 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.698967 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.799313 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:45 crc kubenswrapper[4681]: E0404 01:56:45.899796 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.000346 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.101286 4681 kubelet_node_status.go:503] "Error 
getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.201487 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.301981 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.402137 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.502513 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.603447 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.703892 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.804609 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:46 crc kubenswrapper[4681]: E0404 01:56:46.905706 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.006189 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.106340 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.207229 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.307911 
4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.420331 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.520957 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.621556 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.722365 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.822888 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:47 crc kubenswrapper[4681]: E0404 01:56:47.924155 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.024879 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.124994 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.226070 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.327065 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.428103 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc 
kubenswrapper[4681]: E0404 01:56:48.528750 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.629219 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.729884 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.830636 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:48 crc kubenswrapper[4681]: E0404 01:56:48.931664 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.017925 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.023008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.023060 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.023076 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.023099 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.023116 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:49Z","lastTransitionTime":"2026-04-04T01:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.034233 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.039148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.039321 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.039349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.039413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.039434 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:49Z","lastTransitionTime":"2026-04-04T01:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.055169 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.059636 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.059675 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.059691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.059714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.059732 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:49Z","lastTransitionTime":"2026-04-04T01:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.071741 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.075819 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.075861 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.075877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.075898 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.075914 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:49Z","lastTransitionTime":"2026-04-04T01:56:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.088671 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.088890 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.088920 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.189014 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.200723 4681 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.202131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.202179 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.202198 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:49 crc kubenswrapper[4681]: I0404 01:56:49.203091 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.203443 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.290033 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.390672 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.491122 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.592053 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.693080 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.793331 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.893968 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:49 crc kubenswrapper[4681]: E0404 01:56:49.994637 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.095134 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.195324 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.296452 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc 
kubenswrapper[4681]: E0404 01:56:50.397524 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.497985 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.598375 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.698746 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.799502 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:50 crc kubenswrapper[4681]: E0404 01:56:50.900713 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.001296 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.101480 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.202643 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.277612 4681 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.303226 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.404225 4681 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.504609 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.604994 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.705874 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.806895 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:51 crc kubenswrapper[4681]: E0404 01:56:51.907768 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.008464 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.108646 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.209478 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.310536 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.411364 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.512389 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.613530 4681 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.714495 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.815639 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:52 crc kubenswrapper[4681]: E0404 01:56:52.916209 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.016853 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.117153 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.218375 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.319294 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.419665 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.520504 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.620674 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.721333 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc 
kubenswrapper[4681]: E0404 01:56:53.822388 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:53 crc kubenswrapper[4681]: E0404 01:56:53.923298 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.023968 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.124785 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.225762 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.326399 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.426843 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.527525 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.628604 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.729568 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.829755 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:54 crc kubenswrapper[4681]: E0404 01:56:54.930931 4681 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Apr 04 01:56:55 crc kubenswrapper[4681]: E0404 01:56:55.031515 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:55 crc kubenswrapper[4681]: E0404 01:56:55.132127 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:55 crc kubenswrapper[4681]: E0404 01:56:55.232349 4681 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.297483 4681 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.334589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.334623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.334633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.334648 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.334659 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.438135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.438197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.438215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.438237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.438254 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.541471 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.541510 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.541520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.541535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.541546 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.644034 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.644082 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.644097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.644117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.644133 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.747628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.747693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.747715 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.747743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.747764 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.850298 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.850329 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.850336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.850349 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.850359 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.952940 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.953013 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.953032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.953057 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:55 crc kubenswrapper[4681]: I0404 01:56:55.953074 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:55Z","lastTransitionTime":"2026-04-04T01:56:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.056622 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.056679 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.056699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.056730 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.056752 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.159481 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.159542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.159559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.159586 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.159603 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.160899 4681 apiserver.go:52] "Watching apiserver" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.168531 4681 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.169095 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-jsn9l","openshift-machine-config-operator/machine-config-daemon-v6mjr","openshift-multus/multus-additional-cni-plugins-bqtgx","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-multus/multus-w5wbs","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-ovn-kubernetes/ovnkube-node-cntwc"] Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.169586 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.169752 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.169796 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.170522 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.169923 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.170638 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.170670 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.170396 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.170110 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.169808 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.171025 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.171052 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.171405 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.171365 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.173544 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.174039 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.174943 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.177640 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.177644 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.178481 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.178559 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.178580 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.179659 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.179714 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.179790 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.179866 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.180245 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.180351 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.180828 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.182074 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.182222 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.182081 4681 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.182686 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.182823 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183079 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183104 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183409 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183447 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183547 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183610 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183646 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.183830 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.184241 4681 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.185775 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.186502 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.214497 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.219088 4681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.232665 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.244116 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.251685 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.251807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.251896 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.251978 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252050 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252006 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252279 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252336 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252350 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252384 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252431 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252453 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252470 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252489 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252505 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252546 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252562 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252576 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252610 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252641 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252657 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.252673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252689 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252703 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252719 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252736 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252750 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252765 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252779 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252793 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252837 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252853 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252862 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252867 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252938 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252963 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.252987 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253011 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253034 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253057 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253079 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Apr 04 01:56:56 
crc kubenswrapper[4681]: I0404 01:56:56.253102 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253123 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253145 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253166 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253188 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253209 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253238 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253250 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253354 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253426 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253462 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253497 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253530 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253562 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253627 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253660 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253694 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253761 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253795 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253828 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253862 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253902 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253936 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253969 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254002 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254035 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254066 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254095 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254124 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254161 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254196 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254343 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254380 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254417 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254483 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254513 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254553 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254587 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254619 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254664 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254697 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254757 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254789 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254822 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254856 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254887 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254923 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254954 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255017 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255052 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255153 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255216 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.255248 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255303 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255338 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255371 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255404 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255439 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255475 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255508 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255541 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255575 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255605 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Apr 04 01:56:56 crc kubenswrapper[4681]: 
I0404 01:56:56.255638 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255684 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255718 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255748 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255777 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255808 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: 
\"5b88f790-22fa-440e-b583-365168c0b23d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255838 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255868 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255900 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255932 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255965 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255998 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256031 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256065 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256097 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256133 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256168 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Apr 04 01:56:56 
crc kubenswrapper[4681]: I0404 01:56:56.256203 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256237 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256295 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256329 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256362 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256393 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256429 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256460 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256493 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256531 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256562 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.256594 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256629 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256662 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256698 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256733 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256791 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: 
\"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256862 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256896 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256927 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253358 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256960 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256994 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257028 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257060 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257094 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257125 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" 
(UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257155 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257192 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257225 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257257 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257718 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.257759 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257792 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257827 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257859 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257915 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257939 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257964 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257986 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258037 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258061 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Apr 
04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258083 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258106 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258127 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258150 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258173 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258196 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258218 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258240 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258287 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258321 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.258372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258394 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258422 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258445 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258501 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258553 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258576 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258598 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258621 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258649 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258686 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258712 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258786 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqjdn\" (UniqueName: \"kubernetes.io/projected/cab7ffc5-0101-48b8-87ab-de8324bacc38-kube-api-access-wqjdn\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258835 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258856 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-os-release\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258878 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259262 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-multus\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259320 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259347 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.259368 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259389 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261976 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262466 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cnibin\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q824j\" (UniqueName: \"kubernetes.io/projected/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-kube-api-access-q824j\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.263400 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263628 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n97gc\" (UniqueName: \"kubernetes.io/projected/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-kube-api-access-n97gc\") pod \"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263684 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 
04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263709 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263737 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263763 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263785 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263811 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-netns\") pod \"multus-w5wbs\" (UID: 
\"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263833 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-system-cni-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263856 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263881 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-cnibin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263903 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-cni-binary-copy\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263925 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-conf-dir\") pod \"multus-w5wbs\" (UID: 
\"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263945 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263964 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263984 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-system-cni-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264004 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264026 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d457ca0b-43c6-4bab-940c-5aa4ab124992-proxy-tls\") pod \"machine-config-daemon-v6mjr\" (UID: 
\"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264055 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264083 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-k8s-cni-cncf-io\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264114 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-etc-kubernetes\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264162 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264186 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-os-release\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264208 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-socket-dir-parent\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264228 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-bin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264249 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-hostroot\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264299 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/d457ca0b-43c6-4bab-940c-5aa4ab124992-rootfs\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264315 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264351 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264360 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264369 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264422 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264457 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8jr\" (UniqueName: \"kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264495 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265000 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt54j\" (UniqueName: \"kubernetes.io/projected/d457ca0b-43c6-4bab-940c-5aa4ab124992-kube-api-access-nt54j\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265027 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265048 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265158 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265214 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-kubelet\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265283 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-daemon-config\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265341 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" 
(UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265445 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-multus-certs\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265476 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-cni-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265504 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-hosts-file\") pod \"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d457ca0b-43c6-4bab-940c-5aa4ab124992-mcd-auth-proxy-config\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265628 4681 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265647 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265666 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265682 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265697 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265712 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265726 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265741 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" 
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265756 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266395 4681 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270666 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264393 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.273105 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.273998 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.275366 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253463 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253878 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.253964 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254184 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254415 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.254834 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255139 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255455 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255658 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255896 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.255994 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256003 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256124 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256366 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256375 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256580 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256635 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256649 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256681 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256905 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.256939 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257034 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257137 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257161 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257282 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257515 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257564 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.257931 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258010 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258133 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.258666 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259081 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259065 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259506 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259569 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259582 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259910 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.259989 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260085 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260101 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260385 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260384 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.279125 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260578 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260691 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260854 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260872 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.260900 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261051 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261114 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261197 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261203 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261110 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261302 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261918 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.261935 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262082 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262761 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262841 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262885 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.262943 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.263389 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264352 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.264633 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264751 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264618 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.264876 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265455 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). 
InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265534 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265505 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265592 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265837 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266053 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266074 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266076 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.265979 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266243 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266628 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.266983 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267469 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267511 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267554 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267397 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267655 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.286227 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267950 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.267971 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.268049 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.268787 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.268816 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.269105 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.269353 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.269602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.269903 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270120 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270200 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270348 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.270440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.271871 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.272144 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:56:56.772100566 +0000 UTC m=+96.437875736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.272144 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.272361 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.272588 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.272614 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.272865 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.273138 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.273381 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.273778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274102 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274145 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274300 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274330 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274346 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274362 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274825 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274852 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.274962 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.275048 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.275125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.275371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.275674 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.276056 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.276091 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.276206 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.277062 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.285483 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.286277 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.286492 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.286857 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:56.786834349 +0000 UTC m=+96.452609489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.286955 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:56.786945472 +0000 UTC m=+96.452720612 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.287345 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.289868 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.293202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.293564 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.293604 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.293620 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.296187 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.299893 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.300089 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.300129 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.300146 4681 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.300315 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:56.800293295 +0000 UTC m=+96.466068425 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.300473 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.300698 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 
01:56:56.300792 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.303891 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.303995 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.304112 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.304438 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.304466 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.304481 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.304533 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:56.804515869 +0000 UTC m=+96.470291099 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.304706 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.305080 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.305514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.305809 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.305928 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.306011 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.306519 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.307256 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.308140 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.308530 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309115 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309744 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309781 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.309936 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.310212 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.310492 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.310630 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.310892 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.310990 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.312018 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.312105 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.312208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.312443 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.312791 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.313402 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.313664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.313785 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.314009 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.314179 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.316605 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.317525 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.321479 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.329690 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.333054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.338536 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.344154 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.354108 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.362491 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366125 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-os-release\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366161 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-socket-dir-parent\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366185 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-bin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366209 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-hostroot\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366231 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d457ca0b-43c6-4bab-940c-5aa4ab124992-rootfs\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366251 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz8jr\" (UniqueName: \"kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366305 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366329 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366362 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt54j\" (UniqueName: \"kubernetes.io/projected/d457ca0b-43c6-4bab-940c-5aa4ab124992-kube-api-access-nt54j\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366381 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-socket-dir-parent\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366383 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366424 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d457ca0b-43c6-4bab-940c-5aa4ab124992-rootfs\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366325 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-bin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366458 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366478 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366595 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd\") pod 
\"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366627 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-kubelet\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-os-release\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366657 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-daemon-config\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366682 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366688 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366695 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-kubelet\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366719 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366501 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-hostroot\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366752 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-multus-certs\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366808 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-cni-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366838 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-hosts-file\") pod \"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366867 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d457ca0b-43c6-4bab-940c-5aa4ab124992-mcd-auth-proxy-config\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqjdn\" (UniqueName: \"kubernetes.io/projected/cab7ffc5-0101-48b8-87ab-de8324bacc38-kube-api-access-wqjdn\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366949 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.366980 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367041 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-os-release\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367058 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-hosts-file\") pod \"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367204 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367409 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-cni-dir\") pod \"multus-w5wbs\" (UID: 
\"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367431 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367450 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-multus-certs\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367075 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367485 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367443 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.367511 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-multus\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-var-lib-cni-multus\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367554 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367584 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367555 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-os-release\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367602 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367637 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cnibin\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367672 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367677 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q824j\" (UniqueName: \"kubernetes.io/projected/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-kube-api-access-q824j\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367720 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n97gc\" (UniqueName: \"kubernetes.io/projected/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-kube-api-access-n97gc\") pod \"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367755 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367820 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367890 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-netns\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367910 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367940 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-system-cni-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367944 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cnibin\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-netns\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368218 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-daemon-config\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.367912 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-system-cni-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368480 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368504 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-cnibin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368526 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368546 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-cni-binary-copy\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368568 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-conf-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368590 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368610 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-system-cni-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368628 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-cnibin\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368651 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d457ca0b-43c6-4bab-940c-5aa4ab124992-proxy-tls\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-k8s-cni-cncf-io\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-etc-kubernetes\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368725 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368746 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368842 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368856 4681 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368869 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368882 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368895 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368908 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368921 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368932 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368944 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368955 4681 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368968 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368979 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368991 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369002 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369014 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369025 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369037 4681 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369049 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369061 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369072 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369084 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369107 4681 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369119 4681 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369131 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369142 4681 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369155 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369167 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369179 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369190 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369202 4681 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369213 4681 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369224 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369235 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369247 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369258 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369289 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369299 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369311 4681 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369315 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-multus-conf-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369323 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/cab7ffc5-0101-48b8-87ab-de8324bacc38-cni-binary-copy\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.369665 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-host-run-k8s-cni-cncf-io\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370043 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-system-cni-dir\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.368700 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370146 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/cab7ffc5-0101-48b8-87ab-de8324bacc38-etc-kubernetes\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370198 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370237 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370295 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370304 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370313 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370341 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370379 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370389 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370392 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370399 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370464 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370486 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370505 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370523 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370541 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370558 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370576 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370593 4681 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370599 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d457ca0b-43c6-4bab-940c-5aa4ab124992-mcd-auth-proxy-config\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-binary-copy\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370612 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370677 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370698 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370715 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370733 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370820 4681 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370923 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.370949 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371000 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371047 4681 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371063 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371080 4681 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371095 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371112 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.371189 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx"
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383701 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383739 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383750 4681 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383758 4681 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383776 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383788 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383798 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383806 4681 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383819 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383829 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383838 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383847 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383855 4681 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383864 4681 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Apr 04
01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383873 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383882 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383890 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383898 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383907 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383915 4681 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383922 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383933 4681 reconciler_common.go:293] "Volume detached for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383942 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383952 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383960 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383969 4681 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383977 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383986 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.383995 4681 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384003 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384013 4681 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384022 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384031 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384040 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384062 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384071 4681 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node 
\"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384080 4681 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384088 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384096 4681 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384104 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384114 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384122 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384129 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc 
kubenswrapper[4681]: I0404 01:56:56.384138 4681 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384146 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384154 4681 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384162 4681 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384171 4681 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384180 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384190 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: 
I0404 01:56:56.384198 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384206 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384213 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384222 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384231 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384241 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384250 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384258 4681 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384280 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384287 4681 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384295 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384304 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384321 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384329 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384339 4681 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384347 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384356 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384364 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384372 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384380 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384388 4681 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384397 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on 
node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384405 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384413 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384421 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384429 4681 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384438 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384446 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384454 4681 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384464 4681 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384471 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384479 4681 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384487 4681 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384496 4681 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384505 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384513 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384521 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node 
\"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384529 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384539 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384547 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384557 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384565 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384575 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384583 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384593 
4681 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384602 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384611 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384622 4681 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384631 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384639 4681 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384648 4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384656 4681 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384665 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384673 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384682 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384690 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384722 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384731 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384740 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384749 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384758 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384766 4681 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384775 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384784 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384792 4681 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.384987 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385210 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385251 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d457ca0b-43c6-4bab-940c-5aa4ab124992-proxy-tls\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385687 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.385717 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.387962 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqjdn\" (UniqueName: \"kubernetes.io/projected/cab7ffc5-0101-48b8-87ab-de8324bacc38-kube-api-access-wqjdn\") pod \"multus-w5wbs\" (UID: \"cab7ffc5-0101-48b8-87ab-de8324bacc38\") " pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.388844 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz8jr\" (UniqueName: \"kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr\") pod \"ovnkube-node-cntwc\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.389157 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt54j\" (UniqueName: \"kubernetes.io/projected/d457ca0b-43c6-4bab-940c-5aa4ab124992-kube-api-access-nt54j\") pod \"machine-config-daemon-v6mjr\" (UID: \"d457ca0b-43c6-4bab-940c-5aa4ab124992\") " pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.389737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q824j\" (UniqueName: \"kubernetes.io/projected/5918fa67-6cfa-4c3b-bc04-7cc7888abf1c-kube-api-access-q824j\") pod \"multus-additional-cni-plugins-bqtgx\" (UID: \"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\") " pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.395996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n97gc\" (UniqueName: \"kubernetes.io/projected/e4e1568b-1dc4-41c2-a74f-38bfabcf1280-kube-api-access-n97gc\") pod 
\"node-resolver-jsn9l\" (UID: \"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\") " pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.487964 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.488015 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.488026 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.488052 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.488066 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.497151 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.511445 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: source /etc/kubernetes/apiserver-url.env Apr 04 01:56:56 crc kubenswrapper[4681]: else Apr 04 01:56:56 crc kubenswrapper[4681]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Apr 04 01:56:56 crc kubenswrapper[4681]: exit 1 Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Apr 04 01:56:56 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.513441 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.513626 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.523336 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 01:56:56 crc kubenswrapper[4681]: W0404 01:56:56.530902 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-50fb338970d0af908900e931462502ddc47528bd7eb0b388dc5ad842a8d978dc WatchSource:0}: Error finding container 50fb338970d0af908900e931462502ddc47528bd7eb0b388dc5ad842a8d978dc: Status 404 returned error can't find the container with id 50fb338970d0af908900e931462502ddc47528bd7eb0b388dc5ad842a8d978dc Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.531527 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.534299 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:56:56 crc kubenswrapper[4681]: set +o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Apr 04 01:56:56 crc kubenswrapper[4681]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Apr 04 01:56:56 crc kubenswrapper[4681]: ho_enable="--enable-hybrid-overlay" Apr 04 01:56:56 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Apr 04 01:56:56 crc kubenswrapper[4681]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Apr 04 01:56:56 crc kubenswrapper[4681]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Apr 04 01:56:56 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-cert-dir="/etc/webhook-cert" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-host=127.0.0.1 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-port=9743 \ Apr 04 01:56:56 crc kubenswrapper[4681]: ${ho_enable} \ Apr 04 01:56:56 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:56:56 crc kubenswrapper[4681]: --disable-approver \ Apr 04 01:56:56 crc kubenswrapper[4681]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --wait-for-kubernetes-api=200s \ Apr 04 01:56:56 crc kubenswrapper[4681]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.541027 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-jsn9l" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.544245 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:56:56 crc kubenswrapper[4681]: set +o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Apr 04 01:56:56 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --disable-webhook \ Apr 04 01:56:56 crc kubenswrapper[4681]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: W0404 01:56:56.544378 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd457ca0b_43c6_4bab_940c_5aa4ab124992.slice/crio-8dee9dfcf0f3a1bfd595aa1bac52ec660b3b42674576551b56af94c7e3b5d014 WatchSource:0}: Error finding container 8dee9dfcf0f3a1bfd595aa1bac52ec660b3b42674576551b56af94c7e3b5d014: Status 404 returned error can't find the container with id 
8dee9dfcf0f3a1bfd595aa1bac52ec660b3b42674576551b56af94c7e3b5d014 Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.545675 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.548553 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.552145 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.554451 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.555977 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.561046 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.561405 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Apr 04 01:56:56 crc kubenswrapper[4681]: set -uo pipefail Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Apr 04 01:56:56 crc kubenswrapper[4681]: HOSTS_FILE="/etc/hosts" Apr 04 01:56:56 crc kubenswrapper[4681]: TEMP_FILE="/etc/hosts.tmp" Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: IFS=', ' read -r -a services <<< "${SERVICES}" Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Make a temporary file with the old hosts file's attributes. Apr 04 01:56:56 crc kubenswrapper[4681]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Apr 04 01:56:56 crc kubenswrapper[4681]: echo "Failed to preserve hosts file. Exiting." Apr 04 01:56:56 crc kubenswrapper[4681]: exit 1 Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: while true; do Apr 04 01:56:56 crc kubenswrapper[4681]: declare -A svc_ips Apr 04 01:56:56 crc kubenswrapper[4681]: for svc in "${services[@]}"; do Apr 04 01:56:56 crc kubenswrapper[4681]: # Fetch service IP from cluster dns if present. We make several tries Apr 04 01:56:56 crc kubenswrapper[4681]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Apr 04 01:56:56 crc kubenswrapper[4681]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Apr 04 01:56:56 crc kubenswrapper[4681]: # support UDP loadbalancers and require reaching DNS through TCP. Apr 04 01:56:56 crc kubenswrapper[4681]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Apr 04 01:56:56 crc kubenswrapper[4681]: for i in ${!cmds[*]} Apr 04 01:56:56 crc kubenswrapper[4681]: do Apr 04 01:56:56 crc kubenswrapper[4681]: ips=($(eval "${cmds[i]}")) Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: svc_ips["${svc}"]="${ips[@]}" Apr 04 01:56:56 crc kubenswrapper[4681]: break Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Update /etc/hosts only if we get valid service IPs Apr 04 01:56:56 crc kubenswrapper[4681]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Apr 04 01:56:56 crc kubenswrapper[4681]: # Stale entries could exist in /etc/hosts if the service is deleted Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -n "${svc_ips[*]-}" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Apr 04 01:56:56 crc kubenswrapper[4681]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Apr 04 01:56:56 crc kubenswrapper[4681]: # Only continue rebuilding the hosts entries if its original content is preserved Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: continue Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Append resolver entries for services Apr 04 01:56:56 crc kubenswrapper[4681]: rc=0 Apr 04 01:56:56 crc kubenswrapper[4681]: for svc in "${!svc_ips[@]}"; do Apr 04 01:56:56 crc kubenswrapper[4681]: for ip in ${svc_ips[${svc}]}; do Apr 04 01:56:56 crc kubenswrapper[4681]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ $rc -ne 0 ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: continue Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Apr 04 01:56:56 crc kubenswrapper[4681]: # Replace /etc/hosts with our modified version if needed Apr 04 01:56:56 crc kubenswrapper[4681]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Apr 04 01:56:56 crc kubenswrapper[4681]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: unset svc_ips Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n97gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-jsn9l_openshift-dns(e4e1568b-1dc4-41c2-a74f-38bfabcf1280): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.562200 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.562226 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.564331 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-jsn9l" podUID="e4e1568b-1dc4-41c2-a74f-38bfabcf1280" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.573801 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath
:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q824j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bqtgx_openshift-multus(5918fa67-6cfa-4c3b-bc04-7cc7888abf1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.574975 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" podUID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" Apr 04 01:56:56 crc kubenswrapper[4681]: W0404 01:56:56.578613 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd004639b_c07a_4401_8588_8af4ed981db3.slice/crio-15cab4aace595262b01cd287da7d8d427cff77f3f5b2304cb74965bc5358f82f WatchSource:0}: Error finding container 15cab4aace595262b01cd287da7d8d427cff77f3f5b2304cb74965bc5358f82f: Status 404 returned error can't find the container with id 15cab4aace595262b01cd287da7d8d427cff77f3f5b2304cb74965bc5358f82f Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.582615 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: init container 
&Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Apr 04 01:56:56 crc kubenswrapper[4681]: apiVersion: v1 Apr 04 01:56:56 crc kubenswrapper[4681]: clusters: Apr 04 01:56:56 crc kubenswrapper[4681]: - cluster: Apr 04 01:56:56 crc kubenswrapper[4681]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Apr 04 01:56:56 crc kubenswrapper[4681]: server: https://api-int.crc.testing:6443 Apr 04 01:56:56 crc kubenswrapper[4681]: name: default-cluster Apr 04 01:56:56 crc kubenswrapper[4681]: contexts: Apr 04 01:56:56 crc kubenswrapper[4681]: - context: Apr 04 01:56:56 crc kubenswrapper[4681]: cluster: default-cluster Apr 04 01:56:56 crc kubenswrapper[4681]: namespace: default Apr 04 01:56:56 crc kubenswrapper[4681]: user: default-auth Apr 04 01:56:56 crc kubenswrapper[4681]: name: default-context Apr 04 01:56:56 crc kubenswrapper[4681]: current-context: default-context Apr 04 01:56:56 crc kubenswrapper[4681]: kind: Config Apr 04 01:56:56 crc kubenswrapper[4681]: preferences: {} Apr 04 01:56:56 crc kubenswrapper[4681]: users: Apr 04 01:56:56 crc kubenswrapper[4681]: - name: default-auth Apr 04 01:56:56 crc kubenswrapper[4681]: user: Apr 04 01:56:56 crc kubenswrapper[4681]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:56:56 crc kubenswrapper[4681]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:56:56 crc kubenswrapper[4681]: EOF Apr 04 01:56:56 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz8jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.583845 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.589763 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.589863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.589891 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.589919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.589938 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.595350 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerStarted","Data":"2d6f41224f67c5a79db2140aa4c2dd0ef249e8b771c8a24119e17e4dbd21bdc1"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.596717 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jsn9l" event={"ID":"e4e1568b-1dc4-41c2-a74f-38bfabcf1280","Type":"ContainerStarted","Data":"7ec73012692193b66ca9e06f6a07009804db2aae51d459aa210a1becf0acacb3"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.596728 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-w5wbs" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.596893 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q824j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bqtgx_openshift-multus(5918fa67-6cfa-4c3b-bc04-7cc7888abf1c): CreateContainerConfigError: services have not yet been read at 
least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.598397 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" podUID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.598764 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Apr 04 01:56:56 crc kubenswrapper[4681]: set -uo pipefail Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Apr 04 01:56:56 crc kubenswrapper[4681]: HOSTS_FILE="/etc/hosts" Apr 04 01:56:56 crc kubenswrapper[4681]: TEMP_FILE="/etc/hosts.tmp" Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: IFS=', ' read -r -a services <<< "${SERVICES}" Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Make a temporary file with the old hosts file's attributes. Apr 04 01:56:56 crc kubenswrapper[4681]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Apr 04 01:56:56 crc kubenswrapper[4681]: echo "Failed to preserve hosts file. Exiting." 
Apr 04 01:56:56 crc kubenswrapper[4681]: exit 1 Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: while true; do Apr 04 01:56:56 crc kubenswrapper[4681]: declare -A svc_ips Apr 04 01:56:56 crc kubenswrapper[4681]: for svc in "${services[@]}"; do Apr 04 01:56:56 crc kubenswrapper[4681]: # Fetch service IP from cluster dns if present. We make several tries Apr 04 01:56:56 crc kubenswrapper[4681]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Apr 04 01:56:56 crc kubenswrapper[4681]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Apr 04 01:56:56 crc kubenswrapper[4681]: # support UDP loadbalancers and require reaching DNS through TCP. Apr 04 01:56:56 crc kubenswrapper[4681]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:56:56 crc kubenswrapper[4681]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Apr 04 01:56:56 crc kubenswrapper[4681]: for i in ${!cmds[*]} Apr 04 01:56:56 crc kubenswrapper[4681]: do Apr 04 01:56:56 crc kubenswrapper[4681]: ips=($(eval "${cmds[i]}")) Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: svc_ips["${svc}"]="${ips[@]}" Apr 04 01:56:56 crc kubenswrapper[4681]: break Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Update /etc/hosts only if we get valid service IPs Apr 04 01:56:56 crc kubenswrapper[4681]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Apr 04 01:56:56 crc kubenswrapper[4681]: # Stale entries could exist in /etc/hosts if the service is deleted Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -n "${svc_ips[*]-}" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Apr 04 01:56:56 crc kubenswrapper[4681]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Apr 04 01:56:56 crc kubenswrapper[4681]: # Only continue rebuilding the hosts entries if its original content is preserved Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: continue Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # Append resolver entries for services Apr 04 01:56:56 crc kubenswrapper[4681]: rc=0 Apr 04 01:56:56 crc kubenswrapper[4681]: for svc in "${!svc_ips[@]}"; do Apr 04 01:56:56 crc kubenswrapper[4681]: for ip in ${svc_ips[${svc}]}; do Apr 04 01:56:56 crc kubenswrapper[4681]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ $rc -ne 0 ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: continue Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Apr 04 01:56:56 crc kubenswrapper[4681]: # Replace /etc/hosts with our modified version if needed Apr 04 01:56:56 crc kubenswrapper[4681]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Apr 04 01:56:56 crc kubenswrapper[4681]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:56:56 crc kubenswrapper[4681]: unset svc_ips Apr 04 01:56:56 crc kubenswrapper[4681]: done Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n97gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-jsn9l_openshift-dns(e4e1568b-1dc4-41c2-a74f-38bfabcf1280): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.601171 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d2cc138368ac4a36be1b68d1c032d425727287c91cebaf7ce5cf6abb17088cd9"} Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.602007 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-jsn9l" podUID="e4e1568b-1dc4-41c2-a74f-38bfabcf1280" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 
01:56:56.603339 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: source /etc/kubernetes/apiserver-url.env Apr 04 01:56:56 crc kubenswrapper[4681]: else Apr 04 01:56:56 crc kubenswrapper[4681]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Apr 04 01:56:56 crc kubenswrapper[4681]: exit 1 Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,}
,EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.603499 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"15cab4aace595262b01cd287da7d8d427cff77f3f5b2304cb74965bc5358f82f"} Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.604575 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.604762 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"687b52a4b381601d286e235a947c26fe17d79da54f4682d01f40023ee685fe2e"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.606542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.606670 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Apr 04 01:56:56 crc kubenswrapper[4681]: apiVersion: v1 Apr 04 01:56:56 crc kubenswrapper[4681]: clusters: Apr 04 01:56:56 crc kubenswrapper[4681]: - cluster: Apr 04 01:56:56 crc kubenswrapper[4681]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Apr 04 01:56:56 crc kubenswrapper[4681]: server: https://api-int.crc.testing:6443 Apr 04 01:56:56 crc kubenswrapper[4681]: name: default-cluster Apr 04 01:56:56 crc kubenswrapper[4681]: contexts: Apr 04 01:56:56 crc kubenswrapper[4681]: - context: Apr 04 01:56:56 crc kubenswrapper[4681]: cluster: 
default-cluster Apr 04 01:56:56 crc kubenswrapper[4681]: namespace: default Apr 04 01:56:56 crc kubenswrapper[4681]: user: default-auth Apr 04 01:56:56 crc kubenswrapper[4681]: name: default-context Apr 04 01:56:56 crc kubenswrapper[4681]: current-context: default-context Apr 04 01:56:56 crc kubenswrapper[4681]: kind: Config Apr 04 01:56:56 crc kubenswrapper[4681]: preferences: {} Apr 04 01:56:56 crc kubenswrapper[4681]: users: Apr 04 01:56:56 crc kubenswrapper[4681]: - name: default-auth Apr 04 01:56:56 crc kubenswrapper[4681]: user: Apr 04 01:56:56 crc kubenswrapper[4681]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:56:56 crc kubenswrapper[4681]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:56:56 crc kubenswrapper[4681]: EOF Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz8jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 
01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.607021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"8dee9dfcf0f3a1bfd595aa1bac52ec660b3b42674576551b56af94c7e3b5d014"} Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.607259 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.608160 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.608386 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at 
least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.609194 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"50fb338970d0af908900e931462502ddc47528bd7eb0b388dc5ad842a8d978dc"} Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.610327 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:56:56 crc kubenswrapper[4681]: set +o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Apr 04 01:56:56 crc kubenswrapper[4681]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Apr 04 01:56:56 crc kubenswrapper[4681]: ho_enable="--enable-hybrid-overlay" Apr 04 01:56:56 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Apr 04 01:56:56 crc kubenswrapper[4681]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Apr 04 01:56:56 crc kubenswrapper[4681]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Apr 04 01:56:56 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-cert-dir="/etc/webhook-cert" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-host=127.0.0.1 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --webhook-port=9743 \ Apr 04 01:56:56 crc kubenswrapper[4681]: ${ho_enable} \ Apr 04 01:56:56 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:56:56 crc kubenswrapper[4681]: --disable-approver \ Apr 04 01:56:56 crc kubenswrapper[4681]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --wait-for-kubernetes-api=200s \ Apr 04 01:56:56 crc kubenswrapper[4681]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.610314 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start 
--payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" 
logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.614257 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.614439 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:56:56 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:56:56 crc kubenswrapper[4681]: set -o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:56:56 crc kubenswrapper[4681]: set +o allexport Apr 04 01:56:56 crc kubenswrapper[4681]: fi Apr 04 01:56:56 crc kubenswrapper[4681]: Apr 04 01:56:56 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Apr 04 01:56:56 crc kubenswrapper[4681]: exec 
/usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:56:56 crc kubenswrapper[4681]: --disable-webhook \ Apr 04 01:56:56 crc kubenswrapper[4681]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Apr 04 01:56:56 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:56:56 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least 
once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.614918 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:56 crc kubenswrapper[4681]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Apr 04 01:56:56 crc kubenswrapper[4681]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Apr 04 01:56:56 crc kubenswrapper[4681]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqjdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-w5wbs_openshift-multus(cab7ffc5-0101-48b8-87ab-de8324bacc38): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:56 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.615692 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.615764 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.616535 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-w5wbs" podUID="cab7ffc5-0101-48b8-87ab-de8324bacc38" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.616565 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.626215 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.633632 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.641891 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.648667 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.658788 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.680694 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691565 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691893 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.691971 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.709045 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.726731 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.736020 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.746989 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.756622 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.764712 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.777783 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.790089 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.790180 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.790219 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.790333 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.790372 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:57.790360029 +0000 UTC m=+97.456135149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.790406 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.790455 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:56:57.790425741 +0000 UTC m=+97.456200861 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.790498 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:57.790488553 +0000 UTC m=+97.456263813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.794984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.795006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.795014 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.795028 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.795036 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.796395 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.806208 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.817525 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.832794 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.846879 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.863473 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.891034 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.891123 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891322 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891348 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:56 crc 
kubenswrapper[4681]: E0404 01:56:56.891364 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891360 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891422 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891440 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:57.891419191 +0000 UTC m=+97.557194321 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891444 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: E0404 01:56:56.891534 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:57.891508124 +0000 UTC m=+97.557283284 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.897959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.898031 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.898050 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.898078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:56 crc kubenswrapper[4681]: I0404 01:56:56.898096 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:56Z","lastTransitionTime":"2026-04-04T01:56:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.001001 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.001070 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.001089 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.001124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.001145 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.105868 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.105930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.105946 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.105967 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.105991 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.206990 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.208511 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.209049 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.209107 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.209125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.209149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.209167 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.211137 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.212750 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.214033 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.215169 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.216469 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.217761 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.219474 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.220742 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.222075 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.224899 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.226227 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.227535 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.229686 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.230854 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.232016 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.232863 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" 
path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.234713 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.236116 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.237126 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.239235 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.240229 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.242153 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.243139 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.244908 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" 
path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.246375 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.247409 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.249354 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.250331 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.250986 4681 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.251122 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.252861 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.253539 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.254086 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.258008 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.259601 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.261782 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.263547 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.266101 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.267253 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.268730 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.271023 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.273170 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.274227 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.276146 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.277467 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.279991 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.281015 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.282785 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.283830 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.285001 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.287093 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.288123 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.312185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.312295 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.312327 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.312357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.312380 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.415529 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.415589 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.415605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.415627 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.415646 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.518130 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.518190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.518207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.518229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.518246 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.613783 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerStarted","Data":"27a4e99f34a05a9fb1ec1570062f020ed0629d5d0e25873906519981a395cd37"} Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.616364 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:56:57 crc kubenswrapper[4681]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Apr 04 01:56:57 crc kubenswrapper[4681]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Apr 04 01:56:57 crc kubenswrapper[4681]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqjdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-w5wbs_openshift-multus(cab7ffc5-0101-48b8-87ab-de8324bacc38): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:56:57 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.618342 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-w5wbs" podUID="cab7ffc5-0101-48b8-87ab-de8324bacc38" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.620424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.620544 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.620563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.620583 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.620601 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.627052 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.641435 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.668718 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.687810 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.706153 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.716959 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.723185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.723235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 
01:56:57.723246 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.723288 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.723300 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.728428 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.740613 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.757658 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.767256 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.781289 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.801747 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.801847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.801866 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:56:59.801841848 +0000 UTC m=+99.467616978 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.801951 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.801950 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.801996 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:59.801983052 +0000 UTC m=+99.467758172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.802020 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.802060 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:59.802049984 +0000 UTC m=+99.467825124 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.825987 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.826050 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.826062 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.826101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.826114 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.902947 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.903043 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903236 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903309 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903314 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903378 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903404 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl 
for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903332 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903497 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:59.903466977 +0000 UTC m=+99.569242137 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:57 crc kubenswrapper[4681]: E0404 01:56:57.903678 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:56:59.903641682 +0000 UTC m=+99.569416842 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.929319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.929374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.929390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.929415 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:57 crc kubenswrapper[4681]: I0404 01:56:57.929432 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:57Z","lastTransitionTime":"2026-04-04T01:56:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.031377 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.031439 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.031462 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.031490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.031511 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.133631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.133666 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.133678 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.133693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.133703 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.200457 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.200489 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:58 crc kubenswrapper[4681]: E0404 01:56:58.200559 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.200498 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:58 crc kubenswrapper[4681]: E0404 01:56:58.200677 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:56:58 crc kubenswrapper[4681]: E0404 01:56:58.200840 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.236608 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.236640 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.236650 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.236664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.236674 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.339362 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.339399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.339410 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.339425 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.339435 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.441245 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.441373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.441398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.441431 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.441449 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.543613 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.543682 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.543701 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.543725 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.543742 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.647185 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.647316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.647343 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.647374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.647412 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.750664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.750756 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.750779 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.750809 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.750831 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.853737 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.853792 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.853809 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.853833 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.853852 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.955927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.955964 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.955972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.955984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:58 crc kubenswrapper[4681]: I0404 01:56:58.955994 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:58Z","lastTransitionTime":"2026-04-04T01:56:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.058750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.058799 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.058815 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.058837 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.058853 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.162309 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.162353 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.162403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.162425 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.162441 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.266158 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.266210 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.266226 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.266249 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.266304 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.369994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.370258 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.370333 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.370362 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.370383 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.473550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.473591 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.473603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.473617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.473628 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.478421 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.478481 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.478493 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.478511 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.478519 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.490517 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.494582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.494616 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.494629 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.494647 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.494661 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.505415 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.509118 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.509143 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.509151 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.509163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.509172 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.520044 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.523891 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.523923 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.523938 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.523958 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.523973 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.541544 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.546918 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.546964 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.546975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.546992 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.547003 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.558874 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.559470 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.576526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.576563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.576574 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.576629 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.576642 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.679192 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.679228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.679239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.679252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.679628 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.729743 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-8jsq4"] Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.730221 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.733528 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.733751 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.733777 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.735076 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.751330 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.769752 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.782578 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.782621 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.782632 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.782651 4681 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.782663 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.787099 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.800787 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.819741 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824576 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.824702 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:57:03.824684953 +0000 UTC m=+103.490460073 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824748 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824771 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-host\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824795 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-serviceca\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824814 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxjst\" (UniqueName: \"kubernetes.io/projected/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-kube-api-access-sxjst\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.824846 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.824904 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.824965 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" 
not registered Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.825024 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:03.825012162 +0000 UTC m=+103.490787282 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.825114 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:03.825070704 +0000 UTC m=+103.490845874 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.829212 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.838349 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.853635 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.861849 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.869828 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.885094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.885135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.885148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 
01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.885166 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.885180 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.893397 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.904991 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.925700 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxjst\" (UniqueName: \"kubernetes.io/projected/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-kube-api-access-sxjst\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.925833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.925955 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.926031 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-host\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926075 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.926098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-serviceca\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926137 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926157 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926214 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.926229 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-host\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926245 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926337 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:03.9262579 +0000 UTC m=+103.592033060 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926363 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:59 crc kubenswrapper[4681]: E0404 01:56:59.926501 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:03.926481907 +0000 UTC m=+103.592257057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.928579 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-serviceca\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.956422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxjst\" (UniqueName: \"kubernetes.io/projected/3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1-kube-api-access-sxjst\") pod \"node-ca-8jsq4\" (UID: \"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\") " pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.988138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.988208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.988227 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.988252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:56:59 crc kubenswrapper[4681]: I0404 01:56:59.988299 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:56:59Z","lastTransitionTime":"2026-04-04T01:56:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.045150 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-8jsq4" Apr 04 01:57:00 crc kubenswrapper[4681]: W0404 01:57:00.063386 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cf3df95_c4d1_4983_81f0_c7ec3b2b5ad1.slice/crio-353328ce54e2bf419c869876db07a3ba10f549a12ebcd9abe9258cd05702e920 WatchSource:0}: Error finding container 353328ce54e2bf419c869876db07a3ba10f549a12ebcd9abe9258cd05702e920: Status 404 returned error can't find the container with id 353328ce54e2bf419c869876db07a3ba10f549a12ebcd9abe9258cd05702e920 Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.066756 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:00 crc kubenswrapper[4681]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Apr 04 01:57:00 crc kubenswrapper[4681]: while [ true ]; Apr 04 01:57:00 crc kubenswrapper[4681]: do Apr 04 01:57:00 crc kubenswrapper[4681]: for f in $(ls /tmp/serviceca); do Apr 04 01:57:00 crc kubenswrapper[4681]: echo $f Apr 04 01:57:00 crc kubenswrapper[4681]: ca_file_path="/tmp/serviceca/${f}" Apr 04 01:57:00 crc kubenswrapper[4681]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Apr 04 01:57:00 crc kubenswrapper[4681]: reg_dir_path="/etc/docker/certs.d/${f}" Apr 04 01:57:00 
crc kubenswrapper[4681]: if [ -e "${reg_dir_path}" ]; then Apr 04 01:57:00 crc kubenswrapper[4681]: cp -u $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:00 crc kubenswrapper[4681]: else Apr 04 01:57:00 crc kubenswrapper[4681]: mkdir $reg_dir_path Apr 04 01:57:00 crc kubenswrapper[4681]: cp $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:00 crc kubenswrapper[4681]: fi Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: for d in $(ls /etc/docker/certs.d); do Apr 04 01:57:00 crc kubenswrapper[4681]: echo $d Apr 04 01:57:00 crc kubenswrapper[4681]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Apr 04 01:57:00 crc kubenswrapper[4681]: reg_conf_path="/tmp/serviceca/${dp}" Apr 04 01:57:00 crc kubenswrapper[4681]: if [ ! -e "${reg_conf_path}" ]; then Apr 04 01:57:00 crc kubenswrapper[4681]: rm -rf /etc/docker/certs.d/$d Apr 04 01:57:00 crc kubenswrapper[4681]: fi Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: sleep 60 & wait ${!} Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8jsq4_openshift-image-registry(3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:00 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.068067 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8jsq4" podUID="3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.091603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.091669 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.091683 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.091700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.091709 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.194492 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.194599 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.194624 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.194656 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.194679 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.200834 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.200869 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.200962 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.201138 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.201254 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.201415 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.298126 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.298199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.298226 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.298256 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.298312 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.401099 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.401228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.401254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.401314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.401336 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.504867 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.504946 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.504965 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.504988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.505006 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.608855 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.608936 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.608959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.608993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.609012 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.623890 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8jsq4" event={"ID":"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1","Type":"ContainerStarted","Data":"353328ce54e2bf419c869876db07a3ba10f549a12ebcd9abe9258cd05702e920"} Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.626467 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:00 crc kubenswrapper[4681]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Apr 04 01:57:00 crc kubenswrapper[4681]: while [ true ]; Apr 04 01:57:00 crc kubenswrapper[4681]: do Apr 04 01:57:00 crc kubenswrapper[4681]: for f in $(ls /tmp/serviceca); do Apr 04 01:57:00 crc kubenswrapper[4681]: echo $f Apr 04 01:57:00 crc kubenswrapper[4681]: ca_file_path="/tmp/serviceca/${f}" Apr 04 01:57:00 crc kubenswrapper[4681]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Apr 04 01:57:00 crc kubenswrapper[4681]: reg_dir_path="/etc/docker/certs.d/${f}" Apr 04 01:57:00 crc kubenswrapper[4681]: if [ -e "${reg_dir_path}" ]; then Apr 04 01:57:00 crc kubenswrapper[4681]: cp -u $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:00 crc kubenswrapper[4681]: else Apr 04 01:57:00 crc kubenswrapper[4681]: mkdir $reg_dir_path Apr 04 01:57:00 crc kubenswrapper[4681]: cp $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:00 crc kubenswrapper[4681]: fi Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: for d in $(ls /etc/docker/certs.d); do Apr 04 01:57:00 crc kubenswrapper[4681]: echo $d Apr 04 01:57:00 crc kubenswrapper[4681]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Apr 04 01:57:00 crc kubenswrapper[4681]: reg_conf_path="/tmp/serviceca/${dp}" Apr 04 01:57:00 crc kubenswrapper[4681]: if [ ! 
-e "${reg_conf_path}" ]; then Apr 04 01:57:00 crc kubenswrapper[4681]: rm -rf /etc/docker/certs.d/$d Apr 04 01:57:00 crc kubenswrapper[4681]: fi Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: sleep 60 & wait ${!} Apr 04 01:57:00 crc kubenswrapper[4681]: done Apr 04 01:57:00 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8jsq4_openshift-image-registry(3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:00 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:00 crc kubenswrapper[4681]: E0404 01:57:00.627599 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8jsq4" podUID="3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.643085 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.652645 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.661296 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.670404 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.680711 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.687581 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.693820 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.702490 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.711849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.712012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.712073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.712167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.712242 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.716146 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.730338 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.739606 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.756972 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.815170 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.815214 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.815225 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.815243 4681 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.815256 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.918224 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.918308 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.918320 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.918338 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:00 crc kubenswrapper[4681]: I0404 01:57:00.918350 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:00Z","lastTransitionTime":"2026-04-04T01:57:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.021741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.021805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.021822 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.021849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.021867 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.124817 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.124901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.124919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.124942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.124960 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.217585 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.227431 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.227490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.227513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.227535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.227552 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.238639 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.259586 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.275771 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.299377 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.309854 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.321198 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.331028 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.331066 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.331083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.331104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.331119 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.335314 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.348079 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.359020 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.388825 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.405423 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.434588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.434651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.434669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.434692 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.434714 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.538010 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.538124 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.538149 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.538176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.538199 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.641100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.641155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.641167 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.641186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.641215 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.744370 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.744405 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.744413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.744426 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.744435 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.847629 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.847691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.847708 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.847731 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.847747 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.950384 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.950424 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.950434 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.950448 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:01 crc kubenswrapper[4681]: I0404 01:57:01.950459 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:01Z","lastTransitionTime":"2026-04-04T01:57:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.052427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.052495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.052507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.052522 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.052533 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.155136 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.155202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.155219 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.155247 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.155311 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.200693 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:02 crc kubenswrapper[4681]: E0404 01:57:02.200929 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.201508 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.201508 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:02 crc kubenswrapper[4681]: E0404 01:57:02.202226 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:02 crc kubenswrapper[4681]: E0404 01:57:02.202705 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.220658 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.221181 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:57:02 crc kubenswrapper[4681]: E0404 01:57:02.221555 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.258364 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.258430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.258449 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.258475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.258494 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.360988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.361061 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.361078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.361104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.361120 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.464148 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.464209 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.464229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.464259 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.464299 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.567036 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.567087 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.567103 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.567130 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.567147 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.630782 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:57:02 crc kubenswrapper[4681]: E0404 01:57:02.631051 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.670240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.670329 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.670346 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.670371 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.670388 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.773054 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.773117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.773136 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.773161 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.773178 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.875918 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.875993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.876017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.876052 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.876077 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.979774 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.979855 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.979873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.979897 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:02 crc kubenswrapper[4681]: I0404 01:57:02.979916 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:02Z","lastTransitionTime":"2026-04-04T01:57:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.082938 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.083027 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.083044 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.083066 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.083082 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.186396 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.186550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.186576 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.186605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.186626 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.289694 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.289761 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.289778 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.289801 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.289818 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.393000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.393078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.393101 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.393131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.393152 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.496712 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.496808 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.496835 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.496864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.496886 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.599532 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.599620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.599644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.599672 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.599693 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.702892 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.702971 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.702994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.703024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.703046 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.806208 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.806311 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.806330 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.806356 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.806373 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.873952 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.874187 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:57:11.874150225 +0000 UTC m=+111.539925385 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.874298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.874388 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.874502 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.874508 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.874578 4681 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:11.874555937 +0000 UTC m=+111.540331097 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.874601 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:11.874590338 +0000 UTC m=+111.540365488 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.909677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.909743 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.909761 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.909785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.909803 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:03Z","lastTransitionTime":"2026-04-04T01:57:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.975659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:03 crc kubenswrapper[4681]: I0404 01:57:03.975766 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.975892 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.975952 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.975954 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.975976 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 
01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.976002 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.976024 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.976081 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:11.976056322 +0000 UTC m=+111.641831472 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:03 crc kubenswrapper[4681]: E0404 01:57:03.976108 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:11.976096953 +0000 UTC m=+111.641872113 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.012253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.012319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.012331 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.012352 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.012364 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.115081 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.115142 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.115158 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.115186 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.115206 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.200240 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.200359 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.200407 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:04 crc kubenswrapper[4681]: E0404 01:57:04.200508 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:04 crc kubenswrapper[4681]: E0404 01:57:04.200730 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:04 crc kubenswrapper[4681]: E0404 01:57:04.200905 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.219165 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.219588 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.219790 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.219998 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.220182 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.323550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.323664 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.323716 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.323741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.323789 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.427520 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.427614 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.427637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.427667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.427710 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.531213 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.531351 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.531373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.531405 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.531439 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.635620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.635676 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.635693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.635714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.635729 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.739319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.739772 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.739810 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.739845 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.739862 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.842368 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.842429 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.842449 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.842475 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.842494 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.945938 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.945995 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.946012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.946035 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:04 crc kubenswrapper[4681]: I0404 01:57:04.946052 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:04Z","lastTransitionTime":"2026-04-04T01:57:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.049336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.049404 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.049426 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.049456 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.049477 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.153024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.153155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.153182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.153217 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.153242 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.255849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.255918 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.255935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.255961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.255978 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.359455 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.359510 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.359526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.359548 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.359565 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.462766 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.462914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.462935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.462959 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.462976 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.566568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.566635 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.566653 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.566678 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.566695 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.604672 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf"] Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.605490 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.608025 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.608916 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.626565 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.646039 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.662861 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.669048 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.669114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.669136 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.669181 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.669206 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.679172 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.695254 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.708392 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.720580 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.734435 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.745789 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.757664 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.772908 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.772972 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.772993 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.773024 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.773041 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.784552 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.797208 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvgg6\" (UniqueName: \"kubernetes.io/projected/fc4fe566-3f65-4de4-9595-80b23fe4149c-kube-api-access-kvgg6\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.797315 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.797374 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.797518 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.803447 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.819463 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.832733 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.876745 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.876823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.876847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 
01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.876877 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.876901 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.898593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.898728 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.898824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvgg6\" (UniqueName: \"kubernetes.io/projected/fc4fe566-3f65-4de4-9595-80b23fe4149c-kube-api-access-kvgg6\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc 
kubenswrapper[4681]: I0404 01:57:05.898883 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.899884 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-env-overrides\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.900057 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.904435 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc4fe566-3f65-4de4-9595-80b23fe4149c-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: \"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.930066 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvgg6\" (UniqueName: \"kubernetes.io/projected/fc4fe566-3f65-4de4-9595-80b23fe4149c-kube-api-access-kvgg6\") pod \"ovnkube-control-plane-749d76644c-sswhf\" (UID: 
\"fc4fe566-3f65-4de4-9595-80b23fe4149c\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.980284 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.980324 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.980335 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.980351 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:05 crc kubenswrapper[4681]: I0404 01:57:05.980363 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:05Z","lastTransitionTime":"2026-04-04T01:57:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.083232 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.083317 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.083334 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.083358 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.083375 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.186963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.187032 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.187056 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.187083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.187102 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.200570 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.200625 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.200643 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.200737 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.200853 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.200942 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.227472 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" Apr 04 01:57:06 crc kubenswrapper[4681]: W0404 01:57:06.238249 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4fe566_3f65_4de4_9595_80b23fe4149c.slice/crio-0c04cfd9ed9852872c4b4dfc9bb462fb40825b921066127cfd6585383d32f9e4 WatchSource:0}: Error finding container 0c04cfd9ed9852872c4b4dfc9bb462fb40825b921066127cfd6585383d32f9e4: Status 404 returned error can't find the container with id 0c04cfd9ed9852872c4b4dfc9bb462fb40825b921066127cfd6585383d32f9e4 Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.240532 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:06 crc kubenswrapper[4681]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:06 crc kubenswrapper[4681]: set -euo pipefail Apr 04 01:57:06 crc kubenswrapper[4681]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Apr 04 01:57:06 crc kubenswrapper[4681]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Apr 04 01:57:06 crc kubenswrapper[4681]: # As the secret mount is optional we must wait for the files to be present. Apr 04 01:57:06 crc kubenswrapper[4681]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Apr 04 01:57:06 crc kubenswrapper[4681]: TS=$(date +%s) Apr 04 01:57:06 crc kubenswrapper[4681]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Apr 04 01:57:06 crc kubenswrapper[4681]: HAS_LOGGED_INFO=0 Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: log_missing_certs(){ Apr 04 01:57:06 crc kubenswrapper[4681]: CUR_TS=$(date +%s) Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Apr 04 01:57:06 crc kubenswrapper[4681]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Apr 04 01:57:06 crc kubenswrapper[4681]: HAS_LOGGED_INFO=1 Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: } Apr 04 01:57:06 crc kubenswrapper[4681]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Apr 04 01:57:06 crc kubenswrapper[4681]: log_missing_certs Apr 04 01:57:06 crc kubenswrapper[4681]: sleep 5 Apr 04 01:57:06 crc kubenswrapper[4681]: done Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Apr 04 01:57:06 crc kubenswrapper[4681]: exec /usr/bin/kube-rbac-proxy \ Apr 04 01:57:06 crc kubenswrapper[4681]: --logtostderr \ Apr 04 01:57:06 crc kubenswrapper[4681]: --secure-listen-address=:9108 \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Apr 04 01:57:06 crc kubenswrapper[4681]: --upstream=http://127.0.0.1:29108/ \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-private-key-file=${TLS_PK} \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-cert-file=${TLS_CERT} Apr 04 01:57:06 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:06 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.242853 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:06 crc kubenswrapper[4681]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:06 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:06 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Apr 04 
01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "false" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: persistent_ips_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: persistent_ips_enabled_flag="--enable-persistent-ips" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: # This is needed so that converting clusters from GA to TP Apr 04 01:57:06 crc kubenswrapper[4681]: # will rollout control plane pods as well Apr 04 01:57:06 crc kubenswrapper[4681]: network_segmentation_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: multi_network_enabled_flag= Apr 04 01:57:06 crc 
kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: multi_network_enabled_flag="--enable-multi-network" Apr 04 01:57:06 crc kubenswrapper[4681]: network_segmentation_enabled_flag="--enable-network-segmentation" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Apr 04 01:57:06 crc kubenswrapper[4681]: exec /usr/bin/ovnkube \ Apr 04 01:57:06 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:57:06 crc kubenswrapper[4681]: --init-cluster-manager "${K8S_NODE}" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --config-file=/run/ovnkube-config/ovnkube.conf \ Apr 04 01:57:06 crc kubenswrapper[4681]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-bind-address "127.0.0.1:29108" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-enable-pprof \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-enable-config-duration \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v4_join_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v6_join_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v4_transit_switch_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v6_transit_switch_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${dns_name_resolver_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${persistent_ips_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${multi_network_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${network_segmentation_enabled_flag} Apr 04 01:57:06 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:06 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.244883 4681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" podUID="fc4fe566-3f65-4de4-9595-80b23fe4149c" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.289852 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.289892 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.289900 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.289913 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.289923 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.318728 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jk6f6"] Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.319279 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.319356 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.332810 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.346607 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.357371 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.370401 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.384478 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.392315 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.392374 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.392395 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.392420 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.392441 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.399177 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.410836 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.421135 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.434312 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.444725 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.456392 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.477045 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.495750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.495805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.495823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.495859 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.495894 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.496910 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.505433 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.505499 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbh6\" (UniqueName: \"kubernetes.io/projected/41bdd8e6-130d-4e3e-b466-313031c233d1-kube-api-access-frbh6\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.512577 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.528371 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.599386 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.599450 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.599468 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 
01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.599490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.599507 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.606405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.606485 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbh6\" (UniqueName: \"kubernetes.io/projected/41bdd8e6-130d-4e3e-b466-313031c233d1-kube-api-access-frbh6\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.606611 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.606730 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. 
No retries permitted until 2026-04-04 01:57:07.106695661 +0000 UTC m=+106.772470821 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.636100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbh6\" (UniqueName: \"kubernetes.io/projected/41bdd8e6-130d-4e3e-b466-313031c233d1-kube-api-access-frbh6\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.644415 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" event={"ID":"fc4fe566-3f65-4de4-9595-80b23fe4149c","Type":"ContainerStarted","Data":"0c04cfd9ed9852872c4b4dfc9bb462fb40825b921066127cfd6585383d32f9e4"} Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.646847 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:06 crc kubenswrapper[4681]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:06 crc kubenswrapper[4681]: set -euo pipefail Apr 04 01:57:06 crc kubenswrapper[4681]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Apr 04 01:57:06 crc kubenswrapper[4681]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Apr 04 01:57:06 crc kubenswrapper[4681]: # As the secret mount is optional we must wait for the files to be present. Apr 04 01:57:06 crc kubenswrapper[4681]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Apr 04 01:57:06 crc kubenswrapper[4681]: TS=$(date +%s) Apr 04 01:57:06 crc kubenswrapper[4681]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Apr 04 01:57:06 crc kubenswrapper[4681]: HAS_LOGGED_INFO=0 Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: log_missing_certs(){ Apr 04 01:57:06 crc kubenswrapper[4681]: CUR_TS=$(date +%s) Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Apr 04 01:57:06 crc kubenswrapper[4681]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Apr 04 01:57:06 crc kubenswrapper[4681]: HAS_LOGGED_INFO=1 Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: } Apr 04 01:57:06 crc kubenswrapper[4681]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Apr 04 01:57:06 crc kubenswrapper[4681]: log_missing_certs Apr 04 01:57:06 crc kubenswrapper[4681]: sleep 5 Apr 04 01:57:06 crc kubenswrapper[4681]: done Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Apr 04 01:57:06 crc kubenswrapper[4681]: exec /usr/bin/kube-rbac-proxy \ Apr 04 01:57:06 crc kubenswrapper[4681]: --logtostderr \ Apr 04 01:57:06 crc kubenswrapper[4681]: --secure-listen-address=:9108 \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Apr 04 01:57:06 crc kubenswrapper[4681]: --upstream=http://127.0.0.1:29108/ \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-private-key-file=${TLS_PK} \ Apr 04 01:57:06 crc kubenswrapper[4681]: --tls-cert-file=${TLS_CERT} Apr 04 01:57:06 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:06 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.649461 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:06 crc kubenswrapper[4681]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:06 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:06 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Apr 04 
01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "false" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: persistent_ips_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: persistent_ips_enabled_flag="--enable-persistent-ips" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: # This is needed so that converting clusters from GA to TP Apr 04 01:57:06 crc kubenswrapper[4681]: # will rollout control plane pods as well Apr 04 01:57:06 crc kubenswrapper[4681]: network_segmentation_enabled_flag= Apr 04 01:57:06 crc kubenswrapper[4681]: multi_network_enabled_flag= Apr 04 01:57:06 crc 
kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:06 crc kubenswrapper[4681]: multi_network_enabled_flag="--enable-multi-network" Apr 04 01:57:06 crc kubenswrapper[4681]: network_segmentation_enabled_flag="--enable-network-segmentation" Apr 04 01:57:06 crc kubenswrapper[4681]: fi Apr 04 01:57:06 crc kubenswrapper[4681]: Apr 04 01:57:06 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Apr 04 01:57:06 crc kubenswrapper[4681]: exec /usr/bin/ovnkube \ Apr 04 01:57:06 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:57:06 crc kubenswrapper[4681]: --init-cluster-manager "${K8S_NODE}" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --config-file=/run/ovnkube-config/ovnkube.conf \ Apr 04 01:57:06 crc kubenswrapper[4681]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-bind-address "127.0.0.1:29108" \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-enable-pprof \ Apr 04 01:57:06 crc kubenswrapper[4681]: --metrics-enable-config-duration \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v4_join_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v6_join_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v4_transit_switch_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${ovn_v6_transit_switch_subnet_opt} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${dns_name_resolver_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${persistent_ips_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${multi_network_enabled_flag} \ Apr 04 01:57:06 crc kubenswrapper[4681]: ${network_segmentation_enabled_flag} Apr 04 01:57:06 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:06 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:06 crc kubenswrapper[4681]: E0404 01:57:06.651042 4681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" podUID="fc4fe566-3f65-4de4-9595-80b23fe4149c" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.675081 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.692793 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\
\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.702458 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.702517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.702535 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.702559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.702576 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.711832 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.727385 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.742304 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.760591 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.771775 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.787258 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.802509 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.805675 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.805733 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.805764 4681 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.805791 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.805809 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.820253 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.833982 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.847573 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.862239 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.873897 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.885432 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.908852 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.908913 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.908935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.908963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:06 crc kubenswrapper[4681]: I0404 01:57:06.908985 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:06Z","lastTransitionTime":"2026-04-04T01:57:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.012037 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.012114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.012138 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.012171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.012194 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.111583 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:07 crc kubenswrapper[4681]: E0404 01:57:07.111893 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:07 crc kubenswrapper[4681]: E0404 01:57:07.112394 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:08.112364263 +0000 UTC m=+107.778139413 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.114726 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.114783 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.114805 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.114834 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.114855 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: E0404 01:57:07.202250 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:07 crc kubenswrapper[4681]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:07 crc kubenswrapper[4681]: set -uo pipefail Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Apr 04 01:57:07 crc kubenswrapper[4681]: HOSTS_FILE="/etc/hosts" Apr 04 01:57:07 crc kubenswrapper[4681]: TEMP_FILE="/etc/hosts.tmp" Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: IFS=', ' read -r -a services <<< "${SERVICES}" Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: # Make a temporary file with the old hosts file's attributes. Apr 04 01:57:07 crc kubenswrapper[4681]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Apr 04 01:57:07 crc kubenswrapper[4681]: echo "Failed to preserve hosts file. Exiting." Apr 04 01:57:07 crc kubenswrapper[4681]: exit 1 Apr 04 01:57:07 crc kubenswrapper[4681]: fi Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: while true; do Apr 04 01:57:07 crc kubenswrapper[4681]: declare -A svc_ips Apr 04 01:57:07 crc kubenswrapper[4681]: for svc in "${services[@]}"; do Apr 04 01:57:07 crc kubenswrapper[4681]: # Fetch service IP from cluster dns if present. We make several tries Apr 04 01:57:07 crc kubenswrapper[4681]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Apr 04 01:57:07 crc kubenswrapper[4681]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Apr 04 01:57:07 crc kubenswrapper[4681]: # support UDP loadbalancers and require reaching DNS through TCP. Apr 04 01:57:07 crc kubenswrapper[4681]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:07 crc kubenswrapper[4681]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:07 crc kubenswrapper[4681]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:07 crc kubenswrapper[4681]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Apr 04 01:57:07 crc kubenswrapper[4681]: for i in ${!cmds[*]} Apr 04 01:57:07 crc kubenswrapper[4681]: do Apr 04 01:57:07 crc kubenswrapper[4681]: ips=($(eval "${cmds[i]}")) Apr 04 01:57:07 crc kubenswrapper[4681]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Apr 04 01:57:07 crc kubenswrapper[4681]: svc_ips["${svc}"]="${ips[@]}" Apr 04 01:57:07 crc kubenswrapper[4681]: break Apr 04 01:57:07 crc kubenswrapper[4681]: fi Apr 04 01:57:07 crc kubenswrapper[4681]: done Apr 04 01:57:07 crc kubenswrapper[4681]: done Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: # Update /etc/hosts only if we get valid service IPs Apr 04 01:57:07 crc kubenswrapper[4681]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Apr 04 01:57:07 crc kubenswrapper[4681]: # Stale entries could exist in /etc/hosts if the service is deleted Apr 04 01:57:07 crc kubenswrapper[4681]: if [[ -n "${svc_ips[*]-}" ]]; then Apr 04 01:57:07 crc kubenswrapper[4681]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Apr 04 01:57:07 crc kubenswrapper[4681]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Apr 04 01:57:07 crc kubenswrapper[4681]: # Only continue rebuilding the hosts entries if its original content is preserved Apr 04 01:57:07 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:07 crc kubenswrapper[4681]: continue Apr 04 01:57:07 crc kubenswrapper[4681]: fi Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: # Append resolver entries for services Apr 04 01:57:07 crc kubenswrapper[4681]: rc=0 Apr 04 01:57:07 crc kubenswrapper[4681]: for svc in "${!svc_ips[@]}"; do Apr 04 01:57:07 crc kubenswrapper[4681]: for ip in ${svc_ips[${svc}]}; do Apr 04 01:57:07 crc kubenswrapper[4681]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Apr 04 01:57:07 crc kubenswrapper[4681]: done Apr 04 01:57:07 crc kubenswrapper[4681]: done Apr 04 01:57:07 crc kubenswrapper[4681]: if [[ $rc -ne 0 ]]; then Apr 04 01:57:07 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:07 crc kubenswrapper[4681]: continue Apr 04 01:57:07 crc kubenswrapper[4681]: fi Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: Apr 04 01:57:07 crc kubenswrapper[4681]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Apr 04 01:57:07 crc kubenswrapper[4681]: # Replace /etc/hosts with our modified version if needed Apr 04 01:57:07 crc kubenswrapper[4681]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Apr 04 01:57:07 crc kubenswrapper[4681]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Apr 04 01:57:07 crc kubenswrapper[4681]: fi Apr 04 01:57:07 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:07 crc kubenswrapper[4681]: unset svc_ips Apr 04 01:57:07 crc kubenswrapper[4681]: done Apr 04 01:57:07 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n97gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-jsn9l_openshift-dns(e4e1568b-1dc4-41c2-a74f-38bfabcf1280): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:07 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:07 crc kubenswrapper[4681]: E0404 01:57:07.203597 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-jsn9l" 
podUID="e4e1568b-1dc4-41c2-a74f-38bfabcf1280" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.217328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.217416 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.217443 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.217473 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.217496 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.319784 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.319901 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.319919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.319945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.319963 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.423248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.423344 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.423362 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.423387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.423404 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.526193 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.526241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.526257 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.526319 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.526337 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.630139 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.630233 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.630257 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.630332 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.630358 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.733895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.733955 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.733978 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.734006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.734025 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.837174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.837302 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.837339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.837370 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.837394 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.940719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.940777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.940795 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.940819 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:07 crc kubenswrapper[4681]: I0404 01:57:07.940838 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:07Z","lastTransitionTime":"2026-04-04T01:57:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.044234 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.044323 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.044384 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.044413 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.044471 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.123447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.123685 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.123815 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:10.123779699 +0000 UTC m=+109.789554859 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.147508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.147570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.147587 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.147647 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.147665 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.200675 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.200711 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.200767 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.200816 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.201031 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.202064 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.202348 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.202561 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.204586 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:08 crc kubenswrapper[4681]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Apr 04 01:57:08 crc kubenswrapper[4681]: apiVersion: v1 Apr 04 01:57:08 crc kubenswrapper[4681]: clusters: Apr 04 01:57:08 crc kubenswrapper[4681]: - cluster: Apr 04 01:57:08 crc kubenswrapper[4681]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Apr 04 01:57:08 crc kubenswrapper[4681]: server: https://api-int.crc.testing:6443 Apr 04 01:57:08 crc kubenswrapper[4681]: name: default-cluster Apr 04 01:57:08 crc kubenswrapper[4681]: contexts: Apr 04 01:57:08 crc kubenswrapper[4681]: - context: Apr 04 01:57:08 crc kubenswrapper[4681]: cluster: default-cluster Apr 04 01:57:08 crc kubenswrapper[4681]: namespace: default Apr 04 01:57:08 crc kubenswrapper[4681]: user: default-auth Apr 04 01:57:08 crc kubenswrapper[4681]: name: default-context Apr 04 01:57:08 crc kubenswrapper[4681]: current-context: default-context Apr 04 01:57:08 crc kubenswrapper[4681]: kind: Config Apr 04 01:57:08 crc kubenswrapper[4681]: preferences: {} Apr 04 01:57:08 crc kubenswrapper[4681]: users: Apr 04 
01:57:08 crc kubenswrapper[4681]: - name: default-auth Apr 04 01:57:08 crc kubenswrapper[4681]: user: Apr 04 01:57:08 crc kubenswrapper[4681]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:57:08 crc kubenswrapper[4681]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:57:08 crc kubenswrapper[4681]: EOF Apr 04 01:57:08 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz8jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:08 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:08 crc kubenswrapper[4681]: E0404 01:57:08.205971 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.223896 4681 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.250719 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.250789 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.250807 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.250836 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.250863 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.354237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.354329 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.354342 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.354362 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.354377 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.458107 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.458195 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.458218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.458255 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.458318 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.561193 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.561291 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.561309 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.561336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.561366 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.664398 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.664457 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.664484 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.664512 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.664535 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.767357 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.767430 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.767454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.767484 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.767506 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.870242 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.870336 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.870354 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.870380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.870397 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.973454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.973513 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.973531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.973554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:08 crc kubenswrapper[4681]: I0404 01:57:08.973571 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:08Z","lastTransitionTime":"2026-04-04T01:57:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.076851 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.077029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.077091 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.077123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.077147 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.181204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.181308 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.181342 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.181371 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.181392 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.202696 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.202826 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q824j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartP
olicy:nil,} start failed in pod multus-additional-cni-plugins-bqtgx_openshift-multus(5918fa67-6cfa-4c3b-bc04-7cc7888abf1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.203846 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.203934 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" podUID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.204343 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:09 crc kubenswrapper[4681]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:09 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:09 crc kubenswrapper[4681]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Apr 04 01:57:09 crc kubenswrapper[4681]: source /etc/kubernetes/apiserver-url.env Apr 04 01:57:09 crc kubenswrapper[4681]: else Apr 04 01:57:09 crc kubenswrapper[4681]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Apr 04 01:57:09 crc kubenswrapper[4681]: exit 1 Apr 04 01:57:09 crc kubenswrapper[4681]: fi Apr 04 01:57:09 crc kubenswrapper[4681]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Apr 04 01:57:09 
crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,
Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:m
etadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:09 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.205969 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.284869 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc 
kubenswrapper[4681]: I0404 01:57:09.284950 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.284970 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.284995 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.285013 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.388570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.388632 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.388649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.388680 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.388703 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.491769 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.491856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.491881 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.491913 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.491940 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.595314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.595425 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.595458 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.595498 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.595525 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.699104 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.699190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.699217 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.699252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.699323 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.794542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.794628 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.794656 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.794689 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.794713 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.813064 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.819316 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.819396 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.819420 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.819449 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.819471 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.838413 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.844134 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.844201 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.844218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.844243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.844290 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.860623 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.866714 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.866823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.866850 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.866882 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.866910 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.887012 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.893495 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.893594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.893623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.893660 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.893685 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.915501 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:09 crc kubenswrapper[4681]: E0404 01:57:09.915725 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.918400 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.918462 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.918486 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.918517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:09 crc kubenswrapper[4681]: I0404 01:57:09.918541 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:09Z","lastTransitionTime":"2026-04-04T01:57:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.021871 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.021968 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.021985 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.022009 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.022025 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.125375 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.125436 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.125459 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.125489 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.125512 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.144629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.144901 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.145022 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:14.144989606 +0000 UTC m=+113.810764766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.200414 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.200458 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.200500 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.200415 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.200620 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.200777 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.200914 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:10 crc kubenswrapper[4681]: E0404 01:57:10.201181 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.229122 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.229176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.229187 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.229205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.229218 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.332933 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.333003 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.333021 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.333051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.333073 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.436707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.436783 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.436802 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.436830 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.436851 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.541879 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.541953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.541966 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.541985 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.542000 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.645373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.645425 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.645433 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.645447 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.645456 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.748159 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.748215 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.748235 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.748305 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.748325 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.851403 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.851484 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.851501 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.851527 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.851552 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.954581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.954651 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.954665 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.954690 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:10 crc kubenswrapper[4681]: I0404 01:57:10.954705 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:10Z","lastTransitionTime":"2026-04-04T01:57:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.058080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.058182 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.058206 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.058238 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.058269 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.162066 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.162135 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.162153 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.162180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.162198 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.206149 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.209687 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:11 crc kubenswrapper[4681]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:11 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:11 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:11 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:11 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:11 crc kubenswrapper[4681]: fi Apr 04 01:57:11 crc kubenswrapper[4681]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Apr 04 01:57:11 crc kubenswrapper[4681]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Apr 04 01:57:11 crc kubenswrapper[4681]: ho_enable="--enable-hybrid-overlay" Apr 04 01:57:11 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Apr 04 01:57:11 crc kubenswrapper[4681]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Apr 04 01:57:11 crc kubenswrapper[4681]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Apr 04 01:57:11 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:57:11 crc kubenswrapper[4681]: --webhook-cert-dir="/etc/webhook-cert" \ Apr 04 01:57:11 crc kubenswrapper[4681]: --webhook-host=127.0.0.1 \ Apr 04 01:57:11 crc kubenswrapper[4681]: --webhook-port=9743 \ Apr 04 01:57:11 crc kubenswrapper[4681]: ${ho_enable} \ Apr 04 01:57:11 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:57:11 crc kubenswrapper[4681]: --disable-approver \ Apr 04 01:57:11 crc kubenswrapper[4681]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Apr 04 01:57:11 crc kubenswrapper[4681]: --wait-for-kubernetes-api=200s \ Apr 04 01:57:11 crc kubenswrapper[4681]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Apr 04 01:57:11 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:57:11 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:11 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.212129 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 
--config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:11 crc 
kubenswrapper[4681]: E0404 01:57:11.212921 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:11 crc kubenswrapper[4681]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:11 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:11 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:11 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:11 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:11 crc kubenswrapper[4681]: fi Apr 04 01:57:11 crc kubenswrapper[4681]: Apr 04 01:57:11 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Apr 04 01:57:11 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:57:11 crc kubenswrapper[4681]: --disable-webhook \ Apr 04 01:57:11 crc kubenswrapper[4681]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Apr 04 01:57:11 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:57:11 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:11 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.213639 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.214067 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.237889 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a56
46fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"sta
te\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092
272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.256027 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.264839 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.264895 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.264917 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.264948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.264970 4681 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.273523 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.288218 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.316094 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.333802 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.352980 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.365680 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.368425 4681 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.368509 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.368526 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.368550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.368568 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.380584 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.396980 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.411416 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.424608 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.437341 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.447534 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.458051 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.470917 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.471020 4681 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.471049 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.471080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.471102 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.472156 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.574788 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.574849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.574867 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.574892 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.574910 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.676843 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.676911 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.676948 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.676979 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.677019 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.779954 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.780029 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.780055 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.780086 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.780110 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.883382 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.883449 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.883469 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.883493 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.883510 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.961567 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.961763 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.961827 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:57:27.961792131 +0000 UTC m=+127.627567291 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.961905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.961955 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.962037 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:27.962015218 +0000 UTC m=+127.627790388 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.962031 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:11 crc kubenswrapper[4681]: E0404 01:57:11.968992 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:27.968927031 +0000 UTC m=+127.634702181 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.986720 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.986790 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.986812 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:11 crc kubenswrapper[4681]: I0404 01:57:11.986837 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:11 crc 
kubenswrapper[4681]: I0404 01:57:11.986855 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:11Z","lastTransitionTime":"2026-04-04T01:57:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.062790 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.062905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063087 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063138 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063161 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063171 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063213 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063232 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063255 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:28.063227854 +0000 UTC m=+127.729003014 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.063357 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:28.063288436 +0000 UTC m=+127.729063596 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.090254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.090358 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.090381 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.090411 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.090433 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.193844 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.193919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.193941 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.193969 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.193991 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.200244 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.200449 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.200474 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.200308 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.200881 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.201051 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.201238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.201474 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1"
Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.202757 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Apr 04 01:57:12 crc kubenswrapper[4681]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT=""
Apr 04 01:57:12 crc kubenswrapper[4681]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT
Apr 04 01:57:12 crc kubenswrapper[4681]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqjdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-w5wbs_openshift-multus(cab7ffc5-0101-48b8-87ab-de8324bacc38): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:12 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:12 crc kubenswrapper[4681]: E0404 01:57:12.204023 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-w5wbs" podUID="cab7ffc5-0101-48b8-87ab-de8324bacc38" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.297441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.297494 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.297517 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.297545 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.297568 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.400706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.400759 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.400776 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.400799 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.400816 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.504248 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.504333 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.504350 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.504373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.504393 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.607204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.607352 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.607385 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.607414 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.607435 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.711127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.711203 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.711223 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.711249 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.711303 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.780158 4681 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.814960 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.815048 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.815080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.815113 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.815135 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.918184 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.918273 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.918339 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.918363 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:12 crc kubenswrapper[4681]: I0404 01:57:12.918380 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:12Z","lastTransitionTime":"2026-04-04T01:57:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.021380 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.021454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.021476 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.021505 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.021526 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.125524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.125595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.125612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.125638 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.125655 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.229668 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.229735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.229758 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.229785 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.229810 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.333793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.333847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.333867 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.333893 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.333916 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.436806 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.436848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.436860 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.436875 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.436886 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.539543 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.539604 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.539620 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.539642 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.539659 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.643070 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.643127 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.643144 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.643166 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.643184 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.746315 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.746395 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.746416 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.746444 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.746465 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.849693 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.849757 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.849778 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.849803 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.849823 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.953064 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.953109 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.953125 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.953145 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:13 crc kubenswrapper[4681]: I0404 01:57:13.953160 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:13Z","lastTransitionTime":"2026-04-04T01:57:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.060102 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.060252 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.060282 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.060359 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.060398 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.163442 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.163803 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.163986 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.164164 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.164369 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.184343 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.184574 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.184644 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:22.184621617 +0000 UTC m=+121.850396777 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.200510 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.200517 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.200556 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.200881 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.200868 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.201139 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.201243 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.201425 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.203632 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:14 crc kubenswrapper[4681]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Apr 04 01:57:14 crc kubenswrapper[4681]: while [ true ]; Apr 04 01:57:14 crc kubenswrapper[4681]: do Apr 04 01:57:14 crc kubenswrapper[4681]: for f in $(ls /tmp/serviceca); do Apr 04 01:57:14 crc kubenswrapper[4681]: echo $f Apr 04 01:57:14 crc kubenswrapper[4681]: ca_file_path="/tmp/serviceca/${f}" Apr 04 01:57:14 crc kubenswrapper[4681]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Apr 04 01:57:14 crc kubenswrapper[4681]: reg_dir_path="/etc/docker/certs.d/${f}" Apr 04 01:57:14 crc kubenswrapper[4681]: if [ -e "${reg_dir_path}" ]; then Apr 04 01:57:14 crc kubenswrapper[4681]: cp -u $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:14 crc kubenswrapper[4681]: else Apr 04 01:57:14 crc kubenswrapper[4681]: mkdir $reg_dir_path Apr 04 01:57:14 crc kubenswrapper[4681]: cp $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:14 crc kubenswrapper[4681]: fi Apr 04 01:57:14 crc kubenswrapper[4681]: done Apr 04 01:57:14 crc kubenswrapper[4681]: for d in $(ls /etc/docker/certs.d); do Apr 04 01:57:14 crc 
kubenswrapper[4681]: echo $d Apr 04 01:57:14 crc kubenswrapper[4681]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Apr 04 01:57:14 crc kubenswrapper[4681]: reg_conf_path="/tmp/serviceca/${dp}" Apr 04 01:57:14 crc kubenswrapper[4681]: if [ ! -e "${reg_conf_path}" ]; then Apr 04 01:57:14 crc kubenswrapper[4681]: rm -rf /etc/docker/certs.d/$d Apr 04 01:57:14 crc kubenswrapper[4681]: fi Apr 04 01:57:14 crc kubenswrapper[4681]: done Apr 04 01:57:14 crc kubenswrapper[4681]: sleep 60 & wait ${!} Apr 04 01:57:14 crc kubenswrapper[4681]: done Apr 04 01:57:14 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8jsq4_openshift-image-registry(3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:14 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:14 crc kubenswrapper[4681]: E0404 01:57:14.204917 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8jsq4" podUID="3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.267364 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.267420 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.267435 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.267479 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.267495 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.370612 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.370677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.370697 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.370724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.370745 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.473905 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.473975 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.473994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.474019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.474039 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.576633 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.576698 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.576715 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.576740 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.576757 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.679545 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.679603 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.679623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.679647 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.679664 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.782935 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.783000 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.783021 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.783068 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.783086 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.885591 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.885667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.885691 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.885724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.885741 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.989323 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.989399 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.989416 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.989441 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:14 crc kubenswrapper[4681]: I0404 01:57:14.989462 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:14Z","lastTransitionTime":"2026-04-04T01:57:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.092451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.092507 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.092524 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.092548 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.092565 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.195183 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.195237 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.195254 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.195317 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.195339 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.298407 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.298478 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.298494 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.298518 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.298534 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.401605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.401681 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.401700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.401724 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.401743 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.503864 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.503904 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.503914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.503931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.503944 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.607803 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.607863 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.607880 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.607902 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.607919 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.710957 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.711017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.711031 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.711045 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.711055 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.814120 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.814180 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.814207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.814228 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.814241 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.917542 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.917715 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.917740 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.917763 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:15 crc kubenswrapper[4681]: I0404 01:57:15.917780 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:15Z","lastTransitionTime":"2026-04-04T01:57:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.021004 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.021073 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.021120 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.021146 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.021165 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.123896 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.123965 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.123989 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.124019 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.124042 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.200227 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:16 crc kubenswrapper[4681]: E0404 01:57:16.200644 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.200705 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.200668 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.200706 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:16 crc kubenswrapper[4681]: E0404 01:57:16.201242 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:16 crc kubenswrapper[4681]: E0404 01:57:16.201378 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:16 crc kubenswrapper[4681]: E0404 01:57:16.202026 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.202475 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.228008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.229100 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.229123 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.229147 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.229165 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.331793 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.331876 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.331894 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.331915 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.331930 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.434686 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.434732 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.434744 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.434762 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.434786 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.537971 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.538046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.538056 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.538078 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.538091 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.640999 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.641063 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.641081 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.641106 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.641125 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.678501 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.681957 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.682339 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.700146 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.714262 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.725936 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.739039 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.743575 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.743631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.743643 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.743667 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.743681 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.752842 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.765925 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.773739 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.783785 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.793497 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.805982 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.817137 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.838252 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.846131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.846164 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.846188 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.846202 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.846232 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.849273 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.858192 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.867161 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.883720 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.950446 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.950485 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.950494 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.950508 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:16 crc kubenswrapper[4681]: I0404 01:57:16.950518 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:16Z","lastTransitionTime":"2026-04-04T01:57:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.053617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.053652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.053660 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.053673 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.053686 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.156629 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.156702 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.156726 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.156755 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.156778 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.259652 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.259703 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.259720 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.259742 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.259757 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.362889 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.362955 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.362972 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.362997 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.363017 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.466065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.466155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.466171 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.466195 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.466211 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.569994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.570173 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.570195 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.570218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.570315 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.673819 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.673965 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.673983 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.674006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.674022 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.777313 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.777397 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.777422 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.777453 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.777473 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.880970 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.881131 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.881204 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.881243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.881309 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.984132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.984190 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.984207 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.984232 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:17 crc kubenswrapper[4681]: I0404 01:57:17.984248 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:17Z","lastTransitionTime":"2026-04-04T01:57:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.088107 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.088160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.088178 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.088200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.088217 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.191824 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.191881 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.191899 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.191927 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.191952 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.200765 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.200783 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:18 crc kubenswrapper[4681]: E0404 01:57:18.200897 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.200951 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.201014 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:18 crc kubenswrapper[4681]: E0404 01:57:18.201238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:18 crc kubenswrapper[4681]: E0404 01:57:18.201415 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:18 crc kubenswrapper[4681]: E0404 01:57:18.201451 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.294632 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.294709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.294735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.294763 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.294786 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.398531 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.398644 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.398669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.398700 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.398723 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.502043 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.502132 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.502162 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.502191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.502216 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.605065 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.605107 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.605119 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.605137 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.605148 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.708759 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.708830 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.708848 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.708873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.708895 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.812103 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.812160 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.812211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.812243 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.812302 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.915536 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.915585 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.915597 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.915617 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:18 crc kubenswrapper[4681]: I0404 01:57:18.915635 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:18Z","lastTransitionTime":"2026-04-04T01:57:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.018497 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.018566 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.018590 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.018621 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.018644 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.120942 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.121012 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.121031 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.121072 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.121103 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: E0404 01:57:19.203246 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:19 crc kubenswrapper[4681]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:19 crc kubenswrapper[4681]: set -euo pipefail Apr 04 01:57:19 crc kubenswrapper[4681]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Apr 04 01:57:19 crc kubenswrapper[4681]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Apr 04 01:57:19 crc kubenswrapper[4681]: # As the secret mount is optional we must wait for the files to be present. 
Apr 04 01:57:19 crc kubenswrapper[4681]: # The service is created in monitor.yaml and this is created in sdn.yaml. Apr 04 01:57:19 crc kubenswrapper[4681]: TS=$(date +%s) Apr 04 01:57:19 crc kubenswrapper[4681]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Apr 04 01:57:19 crc kubenswrapper[4681]: HAS_LOGGED_INFO=0 Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: log_missing_certs(){ Apr 04 01:57:19 crc kubenswrapper[4681]: CUR_TS=$(date +%s) Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Apr 04 01:57:19 crc kubenswrapper[4681]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Apr 04 01:57:19 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Apr 04 01:57:19 crc kubenswrapper[4681]: HAS_LOGGED_INFO=1 Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: } Apr 04 01:57:19 crc kubenswrapper[4681]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Apr 04 01:57:19 crc kubenswrapper[4681]: log_missing_certs Apr 04 01:57:19 crc kubenswrapper[4681]: sleep 5 Apr 04 01:57:19 crc kubenswrapper[4681]: done Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Apr 04 01:57:19 crc kubenswrapper[4681]: exec /usr/bin/kube-rbac-proxy \ Apr 04 01:57:19 crc kubenswrapper[4681]: --logtostderr \ Apr 04 01:57:19 crc kubenswrapper[4681]: --secure-listen-address=:9108 \ Apr 04 01:57:19 crc kubenswrapper[4681]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Apr 04 01:57:19 crc kubenswrapper[4681]: --upstream=http://127.0.0.1:29108/ \ Apr 04 01:57:19 crc kubenswrapper[4681]: --tls-private-key-file=${TLS_PK} \ Apr 04 01:57:19 crc kubenswrapper[4681]: --tls-cert-file=${TLS_CERT} Apr 04 01:57:19 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:19 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:19 crc kubenswrapper[4681]: E0404 01:57:19.205585 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:19 crc kubenswrapper[4681]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:19 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:19 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Apr 04 
01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "" != "" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "false" == "true" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: persistent_ips_enabled_flag= Apr 04 01:57:19 crc kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: persistent_ips_enabled_flag="--enable-persistent-ips" Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: # This is needed so that converting clusters from GA to TP Apr 04 01:57:19 crc kubenswrapper[4681]: # will rollout control plane pods as well Apr 04 01:57:19 crc kubenswrapper[4681]: network_segmentation_enabled_flag= Apr 04 01:57:19 crc kubenswrapper[4681]: multi_network_enabled_flag= Apr 04 01:57:19 crc 
kubenswrapper[4681]: if [[ "true" == "true" ]]; then Apr 04 01:57:19 crc kubenswrapper[4681]: multi_network_enabled_flag="--enable-multi-network" Apr 04 01:57:19 crc kubenswrapper[4681]: network_segmentation_enabled_flag="--enable-network-segmentation" Apr 04 01:57:19 crc kubenswrapper[4681]: fi Apr 04 01:57:19 crc kubenswrapper[4681]: Apr 04 01:57:19 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Apr 04 01:57:19 crc kubenswrapper[4681]: exec /usr/bin/ovnkube \ Apr 04 01:57:19 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:57:19 crc kubenswrapper[4681]: --init-cluster-manager "${K8S_NODE}" \ Apr 04 01:57:19 crc kubenswrapper[4681]: --config-file=/run/ovnkube-config/ovnkube.conf \ Apr 04 01:57:19 crc kubenswrapper[4681]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Apr 04 01:57:19 crc kubenswrapper[4681]: --metrics-bind-address "127.0.0.1:29108" \ Apr 04 01:57:19 crc kubenswrapper[4681]: --metrics-enable-pprof \ Apr 04 01:57:19 crc kubenswrapper[4681]: --metrics-enable-config-duration \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${ovn_v4_join_subnet_opt} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${ovn_v6_join_subnet_opt} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${ovn_v4_transit_switch_subnet_opt} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${ovn_v6_transit_switch_subnet_opt} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${dns_name_resolver_enabled_flag} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${persistent_ips_enabled_flag} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${multi_network_enabled_flag} \ Apr 04 01:57:19 crc kubenswrapper[4681]: ${network_segmentation_enabled_flag} Apr 04 01:57:19 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kvgg6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-749d76644c-sswhf_openshift-ovn-kubernetes(fc4fe566-3f65-4de4-9595-80b23fe4149c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:19 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:19 crc kubenswrapper[4681]: E0404 01:57:19.207604 4681 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" podUID="fc4fe566-3f65-4de4-9595-80b23fe4149c" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.224767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.224831 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.224847 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.224873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.224891 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.328030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.328098 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.328115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.328141 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.328163 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.432156 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.432219 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.432239 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.432301 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.432319 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.535162 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.535200 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.535210 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.535229 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.535245 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.638759 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.638800 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.638810 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.638824 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.638835 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.741740 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.741786 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.741799 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.741814 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.741826 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.844546 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.844582 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.844592 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.844607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.844619 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.946956 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.946986 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.946994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.947006 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:19 crc kubenswrapper[4681]: I0404 01:57:19.947015 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:19Z","lastTransitionTime":"2026-04-04T01:57:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.048976 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.049046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.049067 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.049097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.049118 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.156843 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.156928 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.156944 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.156963 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.156975 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.200899 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.201215 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.201155 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.201144 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.201471 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.201859 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.201921 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.201996 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.203199 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,Volum
eDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.204630 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.218580 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.218631 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.218642 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.218657 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.218669 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.229965 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.233852 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.234042 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.234211 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.234416 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.234547 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.245464 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.248669 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.248709 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.248722 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.248740 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.248753 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.259015 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.263382 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.263420 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.263432 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.263451 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.263463 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.274111 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.278017 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.278051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.278063 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.278080 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.278092 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.289191 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:20 crc kubenswrapper[4681]: E0404 01:57:20.289362 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.291496 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.291528 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.291539 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.291554 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.291565 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.394051 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.394089 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.394098 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.394114 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.394125 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.497390 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.497465 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.497487 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.497516 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.497537 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.600253 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.600311 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.600322 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.600337 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.600351 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.702699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.702739 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.702750 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.702767 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.702778 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.805120 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.805183 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.805196 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.805212 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.805225 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.909085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.909218 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.909240 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.909292 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:20 crc kubenswrapper[4681]: I0404 01:57:20.909309 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:20Z","lastTransitionTime":"2026-04-04T01:57:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.012553 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.012623 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.012649 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.012710 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.012735 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:21Z","lastTransitionTime":"2026-04-04T01:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.117120 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.117175 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.117191 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.117213 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.117227 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:21Z","lastTransitionTime":"2026-04-04T01:57:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:21 crc kubenswrapper[4681]: E0404 01:57:21.218075 4681 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.218307 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.248926 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945
ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.266057 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"co
ntainerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.281651 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.300394 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: E0404 01:57:21.301903 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.314527 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.328446 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.342677 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.359750 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.375155 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.389886 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.402542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.415711 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.431978 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.444611 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:21 crc kubenswrapper[4681]: I0404 01:57:21.454338 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:22 crc kubenswrapper[4681]: I0404 01:57:22.200649 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:22 crc kubenswrapper[4681]: I0404 01:57:22.200674 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:22 crc kubenswrapper[4681]: I0404 01:57:22.200991 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:22 crc kubenswrapper[4681]: I0404 01:57:22.201126 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.201126 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.201599 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.201802 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.201964 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.203340 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q824j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-bqtgx_openshift-multus(5918fa67-6cfa-4c3b-bc04-7cc7888abf1c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.203448 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:22 crc kubenswrapper[4681]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Apr 04 01:57:22 crc kubenswrapper[4681]: apiVersion: v1 Apr 04 01:57:22 crc kubenswrapper[4681]: clusters: Apr 04 01:57:22 crc kubenswrapper[4681]: - cluster: Apr 04 01:57:22 crc kubenswrapper[4681]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Apr 04 01:57:22 crc kubenswrapper[4681]: server: https://api-int.crc.testing:6443 Apr 04 01:57:22 crc kubenswrapper[4681]: name: default-cluster Apr 04 01:57:22 crc kubenswrapper[4681]: contexts: Apr 04 01:57:22 crc kubenswrapper[4681]: - context: Apr 04 01:57:22 crc kubenswrapper[4681]: cluster: default-cluster Apr 04 01:57:22 crc kubenswrapper[4681]: namespace: default Apr 04 01:57:22 crc kubenswrapper[4681]: user: default-auth Apr 04 01:57:22 crc kubenswrapper[4681]: name: default-context Apr 04 01:57:22 crc kubenswrapper[4681]: current-context: default-context Apr 04 01:57:22 crc kubenswrapper[4681]: kind: Config Apr 04 01:57:22 crc kubenswrapper[4681]: preferences: {} Apr 04 01:57:22 crc kubenswrapper[4681]: 
users: Apr 04 01:57:22 crc kubenswrapper[4681]: - name: default-auth Apr 04 01:57:22 crc kubenswrapper[4681]: user: Apr 04 01:57:22 crc kubenswrapper[4681]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:57:22 crc kubenswrapper[4681]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Apr 04 01:57:22 crc kubenswrapper[4681]: EOF Apr 04 01:57:22 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vz8jr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:22 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.204514 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:22 crc kubenswrapper[4681]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:22 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 
01:57:22 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:22 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:22 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Apr 04 01:57:22 crc kubenswrapper[4681]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Apr 04 01:57:22 crc kubenswrapper[4681]: ho_enable="--enable-hybrid-overlay" Apr 04 01:57:22 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Apr 04 01:57:22 crc kubenswrapper[4681]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Apr 04 01:57:22 crc kubenswrapper[4681]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Apr 04 01:57:22 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:57:22 crc kubenswrapper[4681]: --webhook-cert-dir="/etc/webhook-cert" \ Apr 04 01:57:22 crc kubenswrapper[4681]: --webhook-host=127.0.0.1 \ Apr 04 01:57:22 crc kubenswrapper[4681]: --webhook-port=9743 \ Apr 04 01:57:22 crc kubenswrapper[4681]: ${ho_enable} \ Apr 04 01:57:22 crc kubenswrapper[4681]: --enable-interconnect \ Apr 04 01:57:22 crc kubenswrapper[4681]: --disable-approver \ Apr 04 01:57:22 crc kubenswrapper[4681]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Apr 04 01:57:22 crc kubenswrapper[4681]: --wait-for-kubernetes-api=200s \ Apr 04 01:57:22 crc kubenswrapper[4681]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Apr 04 01:57:22 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:57:22 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:22 crc 
kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.204587 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" podUID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.204597 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:22 crc kubenswrapper[4681]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:22 crc kubenswrapper[4681]: set -uo pipefail Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Apr 04 01:57:22 crc kubenswrapper[4681]: HOSTS_FILE="/etc/hosts" Apr 04 01:57:22 crc kubenswrapper[4681]: TEMP_FILE="/etc/hosts.tmp" Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: IFS=', ' read -r -a services <<< "${SERVICES}" Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: # Make a temporary file with the old hosts file's attributes. Apr 04 01:57:22 crc kubenswrapper[4681]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Apr 04 01:57:22 crc kubenswrapper[4681]: echo "Failed to preserve hosts file. Exiting." 
Apr 04 01:57:22 crc kubenswrapper[4681]: exit 1 Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: while true; do Apr 04 01:57:22 crc kubenswrapper[4681]: declare -A svc_ips Apr 04 01:57:22 crc kubenswrapper[4681]: for svc in "${services[@]}"; do Apr 04 01:57:22 crc kubenswrapper[4681]: # Fetch service IP from cluster dns if present. We make several tries Apr 04 01:57:22 crc kubenswrapper[4681]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Apr 04 01:57:22 crc kubenswrapper[4681]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Apr 04 01:57:22 crc kubenswrapper[4681]: # support UDP loadbalancers and require reaching DNS through TCP. Apr 04 01:57:22 crc kubenswrapper[4681]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:22 crc kubenswrapper[4681]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:22 crc kubenswrapper[4681]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Apr 04 01:57:22 crc kubenswrapper[4681]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Apr 04 01:57:22 crc kubenswrapper[4681]: for i in ${!cmds[*]} Apr 04 01:57:22 crc kubenswrapper[4681]: do Apr 04 01:57:22 crc kubenswrapper[4681]: ips=($(eval "${cmds[i]}")) Apr 04 01:57:22 crc kubenswrapper[4681]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Apr 04 01:57:22 crc kubenswrapper[4681]: svc_ips["${svc}"]="${ips[@]}" Apr 04 01:57:22 crc kubenswrapper[4681]: break Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: done Apr 04 01:57:22 crc kubenswrapper[4681]: done Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: # Update /etc/hosts only if we get valid service IPs Apr 04 01:57:22 crc kubenswrapper[4681]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Apr 04 01:57:22 crc kubenswrapper[4681]: # Stale entries could exist in /etc/hosts if the service is deleted Apr 04 01:57:22 crc kubenswrapper[4681]: if [[ -n "${svc_ips[*]-}" ]]; then Apr 04 01:57:22 crc kubenswrapper[4681]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Apr 04 01:57:22 crc kubenswrapper[4681]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Apr 04 01:57:22 crc kubenswrapper[4681]: # Only continue rebuilding the hosts entries if its original content is preserved Apr 04 01:57:22 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:22 crc kubenswrapper[4681]: continue Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: # Append resolver entries for services Apr 04 01:57:22 crc kubenswrapper[4681]: rc=0 Apr 04 01:57:22 crc kubenswrapper[4681]: for svc in "${!svc_ips[@]}"; do Apr 04 01:57:22 crc kubenswrapper[4681]: for ip in ${svc_ips[${svc}]}; do Apr 04 01:57:22 crc kubenswrapper[4681]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Apr 04 01:57:22 crc kubenswrapper[4681]: done Apr 04 01:57:22 crc kubenswrapper[4681]: done Apr 04 01:57:22 crc kubenswrapper[4681]: if [[ $rc -ne 0 ]]; then Apr 04 01:57:22 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:22 crc kubenswrapper[4681]: continue Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Apr 04 01:57:22 crc kubenswrapper[4681]: # Replace /etc/hosts with our modified version if needed Apr 04 01:57:22 crc kubenswrapper[4681]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Apr 04 01:57:22 crc kubenswrapper[4681]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: sleep 60 & wait Apr 04 01:57:22 crc kubenswrapper[4681]: unset svc_ips Apr 04 01:57:22 crc kubenswrapper[4681]: done Apr 04 01:57:22 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n97gc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-jsn9l_openshift-dns(e4e1568b-1dc4-41c2-a74f-38bfabcf1280): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:22 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.204601 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.205740 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-jsn9l" podUID="e4e1568b-1dc4-41c2-a74f-38bfabcf1280" Apr 
04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.207309 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:22 crc kubenswrapper[4681]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Apr 04 01:57:22 crc kubenswrapper[4681]: if [[ -f "/env/_master" ]]; then Apr 04 01:57:22 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:22 crc kubenswrapper[4681]: source "/env/_master" Apr 04 01:57:22 crc kubenswrapper[4681]: set +o allexport Apr 04 01:57:22 crc kubenswrapper[4681]: fi Apr 04 01:57:22 crc kubenswrapper[4681]: Apr 04 01:57:22 crc kubenswrapper[4681]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Apr 04 01:57:22 crc kubenswrapper[4681]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Apr 04 01:57:22 crc kubenswrapper[4681]: --disable-webhook \ Apr 04 01:57:22 crc kubenswrapper[4681]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Apr 04 01:57:22 crc kubenswrapper[4681]: --loglevel="${LOGLEVEL}" Apr 04 01:57:22 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:22 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.208542 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Apr 04 01:57:22 crc kubenswrapper[4681]: I0404 01:57:22.281562 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.281825 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:22 crc kubenswrapper[4681]: E0404 01:57:22.281948 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:57:38.28191531 +0000 UTC m=+137.947690460 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:23 crc kubenswrapper[4681]: E0404 01:57:23.203085 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:23 crc kubenswrapper[4681]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Apr 04 01:57:23 crc kubenswrapper[4681]: set -o allexport Apr 04 01:57:23 crc kubenswrapper[4681]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Apr 04 01:57:23 crc kubenswrapper[4681]: source /etc/kubernetes/apiserver-url.env Apr 04 01:57:23 crc kubenswrapper[4681]: else Apr 04 01:57:23 crc kubenswrapper[4681]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Apr 04 01:57:23 crc kubenswrapper[4681]: exit 1 Apr 04 01:57:23 crc kubenswrapper[4681]: fi Apr 04 01:57:23 crc kubenswrapper[4681]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Apr 04 01:57:23 crc kubenswrapper[4681]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:23 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:23 crc kubenswrapper[4681]: E0404 01:57:23.203194 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:23 crc kubenswrapper[4681]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Apr 04 01:57:23 crc kubenswrapper[4681]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Apr 04 01:57:23 crc kubenswrapper[4681]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wqjdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-w5wbs_openshift-multus(cab7ffc5-0101-48b8-87ab-de8324bacc38): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:23 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:23 crc kubenswrapper[4681]: E0404 01:57:23.204513 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Apr 04 01:57:23 crc kubenswrapper[4681]: E0404 01:57:23.204574 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-w5wbs" podUID="cab7ffc5-0101-48b8-87ab-de8324bacc38" Apr 04 01:57:24 crc kubenswrapper[4681]: I0404 01:57:24.200346 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:24 crc kubenswrapper[4681]: I0404 01:57:24.200458 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.200521 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:24 crc kubenswrapper[4681]: I0404 01:57:24.200571 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.200693 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:24 crc kubenswrapper[4681]: I0404 01:57:24.200716 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.200997 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.201129 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.203475 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.206215 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nt54j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Apr 04 01:57:24 crc kubenswrapper[4681]: E0404 01:57:24.207451 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 01:57:25 crc kubenswrapper[4681]: E0404 01:57:25.202915 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 01:57:25 crc kubenswrapper[4681]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Apr 04 01:57:25 crc kubenswrapper[4681]: while [ true ]; Apr 04 01:57:25 crc kubenswrapper[4681]: do Apr 04 01:57:25 crc kubenswrapper[4681]: for f in $(ls /tmp/serviceca); do Apr 04 01:57:25 crc kubenswrapper[4681]: echo $f Apr 04 01:57:25 crc kubenswrapper[4681]: ca_file_path="/tmp/serviceca/${f}" Apr 04 01:57:25 crc kubenswrapper[4681]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Apr 04 01:57:25 crc kubenswrapper[4681]: reg_dir_path="/etc/docker/certs.d/${f}" Apr 04 01:57:25 crc kubenswrapper[4681]: if [ -e "${reg_dir_path}" ]; then Apr 04 01:57:25 crc kubenswrapper[4681]: cp -u $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:25 crc kubenswrapper[4681]: else Apr 04 01:57:25 crc kubenswrapper[4681]: mkdir $reg_dir_path Apr 04 01:57:25 crc kubenswrapper[4681]: cp $ca_file_path $reg_dir_path/ca.crt Apr 04 01:57:25 crc kubenswrapper[4681]: fi Apr 04 01:57:25 crc kubenswrapper[4681]: done Apr 04 01:57:25 crc kubenswrapper[4681]: for d in $(ls /etc/docker/certs.d); do Apr 04 01:57:25 crc kubenswrapper[4681]: echo $d Apr 04 01:57:25 crc kubenswrapper[4681]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Apr 04 01:57:25 crc kubenswrapper[4681]: reg_conf_path="/tmp/serviceca/${dp}" Apr 04 01:57:25 crc kubenswrapper[4681]: if [ ! 
-e "${reg_conf_path}" ]; then Apr 04 01:57:25 crc kubenswrapper[4681]: rm -rf /etc/docker/certs.d/$d Apr 04 01:57:25 crc kubenswrapper[4681]: fi Apr 04 01:57:25 crc kubenswrapper[4681]: done Apr 04 01:57:25 crc kubenswrapper[4681]: sleep 60 & wait ${!} Apr 04 01:57:25 crc kubenswrapper[4681]: done Apr 04 01:57:25 crc kubenswrapper[4681]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sxjst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-8jsq4_openshift-image-registry(3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Apr 04 01:57:25 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 01:57:25 crc kubenswrapper[4681]: E0404 01:57:25.204218 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-8jsq4" podUID="3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1" Apr 04 01:57:26 crc kubenswrapper[4681]: I0404 01:57:26.200365 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:26 crc kubenswrapper[4681]: I0404 01:57:26.200457 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:26 crc kubenswrapper[4681]: E0404 01:57:26.200540 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:26 crc kubenswrapper[4681]: I0404 01:57:26.200552 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:26 crc kubenswrapper[4681]: E0404 01:57:26.200644 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:26 crc kubenswrapper[4681]: E0404 01:57:26.200792 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:26 crc kubenswrapper[4681]: I0404 01:57:26.200389 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:26 crc kubenswrapper[4681]: E0404 01:57:26.201519 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:26 crc kubenswrapper[4681]: E0404 01:57:26.303554 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:27 crc kubenswrapper[4681]: I0404 01:57:27.131153 4681 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.049511 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.049703 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.049830 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:00.049794901 +0000 UTC m=+159.715570061 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.049884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.049900 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.049988 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:58:00.049966656 +0000 UTC m=+159.715741816 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.050013 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.050050 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:58:00.050040318 +0000 UTC m=+159.715815448 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.151226 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151590 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:57:28 crc 
kubenswrapper[4681]: E0404 01:57:28.151642 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151670 4681 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151709 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151739 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151758 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:58:00.151729779 +0000 UTC m=+159.817504939 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151761 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.151862 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:58:00.151836212 +0000 UTC m=+159.817611372 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.151602 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.200149 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.200186 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.200221 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:28 crc kubenswrapper[4681]: I0404 01:57:28.200164 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.200404 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.200523 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.200796 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:28 crc kubenswrapper[4681]: E0404 01:57:28.200959 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.200203 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.200310 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.200247 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.200219 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.200440 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.200614 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.200955 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.201008 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.615241 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.615315 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.615328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.615347 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.615361 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:30Z","lastTransitionTime":"2026-04-04T01:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.625326 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.629328 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.629366 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.629378 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.629396 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.629408 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:30Z","lastTransitionTime":"2026-04-04T01:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.643900 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.647914 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.647973 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.647988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.648008 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.648020 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:30Z","lastTransitionTime":"2026-04-04T01:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.661724 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.665872 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.665910 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.665937 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.665953 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.665967 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:30Z","lastTransitionTime":"2026-04-04T01:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.679866 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.683570 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.683607 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.683619 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.683637 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:30 crc kubenswrapper[4681]: I0404 01:57:30.683648 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:30Z","lastTransitionTime":"2026-04-04T01:57:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.697562 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:30 crc kubenswrapper[4681]: E0404 01:57:30.697799 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.217710 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.240718 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.254077 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.270425 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.285481 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: E0404 01:57:31.304653 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.305997 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.319988 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.333672 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.349048 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.360447 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.373640 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.402921 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.418792 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.430123 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.446132 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:31 crc kubenswrapper[4681]: I0404 01:57:31.469964 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:32 crc kubenswrapper[4681]: I0404 01:57:32.200476 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:32 crc kubenswrapper[4681]: E0404 01:57:32.200833 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:32 crc kubenswrapper[4681]: I0404 01:57:32.200471 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:32 crc kubenswrapper[4681]: E0404 01:57:32.201113 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:32 crc kubenswrapper[4681]: I0404 01:57:32.200476 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:32 crc kubenswrapper[4681]: E0404 01:57:32.201367 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:32 crc kubenswrapper[4681]: I0404 01:57:32.200565 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:32 crc kubenswrapper[4681]: E0404 01:57:32.201659 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.431128 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.446507 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.457864 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.473242 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.481992 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.494040 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.503480 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.514523 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.523470 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.536824 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.555160 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.576539 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945
ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.590258 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.603221 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.613876 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.625542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.634225 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.742896 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0"} Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.742975 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a"} Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.754924 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.772068 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.781189 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.791015 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.800317 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.821084 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.833974 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"
resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 
01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.848142 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.865754 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.888737 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.903359 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.917345 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.927608 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.939531 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.956618 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:33 crc kubenswrapper[4681]: I0404 01:57:33.971292 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.199927 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.200005 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:34 crc kubenswrapper[4681]: E0404 01:57:34.200085 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.200162 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:34 crc kubenswrapper[4681]: E0404 01:57:34.200201 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.200310 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:34 crc kubenswrapper[4681]: E0404 01:57:34.200425 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:34 crc kubenswrapper[4681]: E0404 01:57:34.200563 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.748815 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3"} Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.753772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" event={"ID":"fc4fe566-3f65-4de4-9595-80b23fe4149c","Type":"ContainerStarted","Data":"b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f"} Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.753865 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" event={"ID":"fc4fe566-3f65-4de4-9595-80b23fe4149c","Type":"ContainerStarted","Data":"91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf"} Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.768027 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.789816 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.811996 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.831152 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.851447 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.865934 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.882795 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.898541 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.927757 4681 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dc
d982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"e
tcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd7
3ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\
"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.946937 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee122
0d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 
01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.961541 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:34 crc kubenswrapper[4681]: I0404 01:57:34.977048 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:34Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.007315 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.024475 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.047546 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.060487 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc 
kubenswrapper[4681]: I0404 01:57:35.084717 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.112587 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.131799 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.149387 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.165946 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.180225 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.196239 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.208275 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.222389 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-
04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] 
\\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\"
:\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.240828 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.256143 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.282128 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.327426 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.348125 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc kubenswrapper[4681]: I0404 01:57:35.358888 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:35 crc 
kubenswrapper[4681]: I0404 01:57:35.368651 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:35Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.200257 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.200315 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.200327 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:36 crc kubenswrapper[4681]: E0404 01:57:36.200505 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.200545 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:36 crc kubenswrapper[4681]: E0404 01:57:36.200731 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:36 crc kubenswrapper[4681]: E0404 01:57:36.200839 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:36 crc kubenswrapper[4681]: E0404 01:57:36.200940 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:36 crc kubenswrapper[4681]: E0404 01:57:36.306497 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.759526 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab" exitCode=0 Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.759575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab"} Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.777456 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.789115 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.801358 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.812974 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.823192 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\
\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.831903 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.841547 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.860034 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.888073 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.910058 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.919894 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.930151 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.956419 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.973829 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 
01:57:36 crc kubenswrapper[4681]: I0404 01:57:36.995521 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:36Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.007513 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc 
kubenswrapper[4681]: I0404 01:57:37.765775 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766745 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037" exitCode=1 Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766823 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766876 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766888 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.766903 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" 
event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.769314 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.769353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.773445 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerStarted","Data":"12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.775503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-jsn9l" event={"ID":"e4e1568b-1dc4-41c2-a74f-38bfabcf1280","Type":"ContainerStarted","Data":"7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.777830 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622"} Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.787877 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.800530 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.816422 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.827223 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.840512 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.852555 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.864376 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.875582 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.893640 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.917986 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.944383 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.960099 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.973732 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:37 crc kubenswrapper[4681]: I0404 01:57:37.987094 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:37Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.005292 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni 
whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.018081 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc 
kubenswrapper[4681]: I0404 01:57:38.030751 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.049208 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.066078 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.079638 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.091168 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.102763 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.114047 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.128836 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.148427 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.163853 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.175226 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.188646 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.200318 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.200394 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.200394 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.200469 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.200549 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.200637 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.200708 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.200922 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.209607 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd 
nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIni
tializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.227601 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" 
Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.251096 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.264512 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.372463 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.372691 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:38 crc kubenswrapper[4681]: E0404 01:57:38.372824 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:58:10.372792066 +0000 UTC m=+170.038567346 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.789304 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.790542 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4"} Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.792045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerStarted","Data":"78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb"} Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.793192 4681 generic.go:334] "Generic (PLEG): container finished" podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" containerID="12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811" exitCode=0 Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.793239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811"} Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.812745 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.832311 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.854575 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.867098 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.880055 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.895437 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.913583 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.929583 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.950715 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.968159 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.977859 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:38 crc kubenswrapper[4681]: I0404 01:57:38.990467 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:38Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.005983 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.019993 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.035555 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.047472 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.064650 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.076068 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.085972 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.097056 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.114201 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.126631 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 
01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.139253 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.151013 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc 
kubenswrapper[4681]: I0404 01:57:39.164180 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.182085 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.199461 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.215753 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933
ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.230501 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.241673 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.251456 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.262717 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.798886 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerStarted","Data":"4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0"} Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.811396 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.821188 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.838286 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.850259 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933
ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.862800 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.875584 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.886039 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.911599 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.928910 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.942015 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.955368 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.966653 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.983984 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:39 crc kubenswrapper[4681]: I0404 01:57:39.996141 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:39Z is after 2025-08-24T17:21:41Z" Apr 04 
01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.012063 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.022556 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc 
kubenswrapper[4681]: I0404 01:57:40.199845 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.199866 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.199921 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.200184 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:40 crc kubenswrapper[4681]: E0404 01:57:40.200789 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:40 crc kubenswrapper[4681]: E0404 01:57:40.200788 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:40 crc kubenswrapper[4681]: E0404 01:57:40.200831 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:40 crc kubenswrapper[4681]: E0404 01:57:40.201356 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.805601 4681 generic.go:334] "Generic (PLEG): container finished" podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" containerID="4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0" exitCode=0 Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.805888 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0"} Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.813698 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.814208 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2"} Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.818130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-8jsq4" event={"ID":"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1","Type":"ContainerStarted","Data":"6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d"} Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.826592 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.845105 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc 
kubenswrapper[4681]: I0404 01:57:40.867696 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.889678 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.909959 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.927406 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.943728 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.957841 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.972313 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:40 crc kubenswrapper[4681]: I0404 01:57:40.988690 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:40Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.007733 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.025733 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.028569 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.028595 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.028605 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.028621 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.028634 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:41Z","lastTransitionTime":"2026-04-04T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.043645 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.044239 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet 
has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800
f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\
":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc300
5909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.049153 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.049194 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.049205 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.049222 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.049236 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:41Z","lastTransitionTime":"2026-04-04T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.057180 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.063319 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.067050 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.067085 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.067095 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.067110 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.067120 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:41Z","lastTransitionTime":"2026-04-04T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.079093 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.080240 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.084117 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.084163 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.084176 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.084197 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.084210 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:41Z","lastTransitionTime":"2026-04-04T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.098167 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.101936 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.101969 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.101979 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.101994 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.102005 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:41Z","lastTransitionTime":"2026-04-04T01:57:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.106452 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.115700 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.115853 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.117702 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc 
kubenswrapper[4681]: I0404 01:57:41.133102 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.153344 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.166878 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.180769 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.196744 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.210111 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.220895 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.231543 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.241679 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.254158 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.267138 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.279316 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.300954 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: E0404 01:57:41.307715 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.322183 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.345377 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.369398 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.387566 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.403508 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc 
kubenswrapper[4681]: I0404 01:57:41.420536 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.442064 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.465509 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.481066 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"prox
y-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.495832 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.508074 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.519445 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe254
47fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.537305 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.568954 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.595823 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.616968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.630524 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.658680 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.826105 4681 generic.go:334] "Generic (PLEG): container finished" podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" containerID="7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8" exitCode=0 Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.826154 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8"} Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.845775 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.863755 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.885932 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.901432 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.918791 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.933514 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.955191 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:41 crc kubenswrapper[4681]: I0404 01:57:41.979824 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:41Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.006748 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.029530 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.043406 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.058979 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.076589 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 
01:57:42.094540 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.113220 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.129078 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.199983 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.200015 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.200051 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.200081 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:42 crc kubenswrapper[4681]: E0404 01:57:42.200196 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:42 crc kubenswrapper[4681]: E0404 01:57:42.200290 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:42 crc kubenswrapper[4681]: E0404 01:57:42.200400 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:42 crc kubenswrapper[4681]: E0404 01:57:42.200487 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.834629 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.836334 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a"} Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.836884 4681 scope.go:117] "RemoveContainer" containerID="88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.837417 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.837562 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.837583 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.842221 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" containerID="99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860" exitCode=0 Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.842288 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860"} Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.857768 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-c
erts\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.887012 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.901288 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.914202 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.929224 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries 
+= 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\
\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.930390 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.953800 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.974804 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980
aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:42 crc kubenswrapper[4681]: I0404 01:57:42.989912 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:42Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc 
kubenswrapper[4681]: I0404 01:57:43.013987 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.029636 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.048305 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.065669 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.091144 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.125948 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.160046 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.174621 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.188651 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.214120 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.231287 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.246543 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.264093 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.280689 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.297398 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.312011 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.325033 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.340005 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 
01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.357968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.374138 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.390034 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.409979 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 
1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.433977 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.455055 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.471999 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.855697 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerStarted","Data":"50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735"} Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.898983 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 
01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.899927 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7"} Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.925450 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 
crc kubenswrapper[4681]: I0404 01:57:43.946694 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.958495 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.969173 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:43 crc kubenswrapper[4681]: I0404 01:57:43.980352 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:43Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.004160 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.018307 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.036165 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.057082 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.076758 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-acl-logging 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 
1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"
mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\
\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.093384 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.107625 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.122120 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc 
kubenswrapper[4681]: I0404 01:57:44.135019 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.148081 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.161830 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.172792 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.183514 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.197426 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.200058 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.200120 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.200149 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:44 crc kubenswrapper[4681]: E0404 01:57:44.200384 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.200394 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:44 crc kubenswrapper[4681]: E0404 01:57:44.201047 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:44 crc kubenswrapper[4681]: E0404 01:57:44.200876 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:44 crc kubenswrapper[4681]: E0404 01:57:44.200459 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.210293 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.210334 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.227862 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.247658 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.263328 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.293011 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.313083 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.328147 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.339486 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc 
kubenswrapper[4681]: I0404 01:57:44.353051 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.368697 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q82
4j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.380559 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.395051 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:44 crc kubenswrapper[4681]: I0404 01:57:44.409131 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:44Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:45 crc kubenswrapper[4681]: I0404 01:57:45.912863 4681 generic.go:334] "Generic (PLEG): container finished" podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" 
containerID="50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735" exitCode=0 Apr 04 01:57:45 crc kubenswrapper[4681]: I0404 01:57:45.912930 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735"} Apr 04 01:57:45 crc kubenswrapper[4681]: I0404 01:57:45.946142 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a
278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:45Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:45 crc kubenswrapper[4681]: I0404 01:57:45.968417 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:45Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:45 crc kubenswrapper[4681]: I0404 01:57:45.990456 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:45Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.003198 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.020433 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.031479 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.048190 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.059777 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc 
kubenswrapper[4681]: I0404 01:57:46.072222 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.088047 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.101831 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.113807 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.125955 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.137159 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.147415 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.157011 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.167256 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:46Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.199969 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.200154 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:46 crc kubenswrapper[4681]: E0404 01:57:46.200232 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.200380 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.200473 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:46 crc kubenswrapper[4681]: E0404 01:57:46.200714 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:46 crc kubenswrapper[4681]: E0404 01:57:46.200801 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:46 crc kubenswrapper[4681]: E0404 01:57:46.200674 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.213228 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 01:57:46 crc kubenswrapper[4681]: E0404 01:57:46.309213 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.924875 4681 generic.go:334] "Generic (PLEG): container finished" podID="5918fa67-6cfa-4c3b-bc04-7cc7888abf1c" containerID="77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6" exitCode=0 Apr 04 01:57:46 crc kubenswrapper[4681]: I0404 01:57:46.925797 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerDied","Data":"77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6"} Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.021881 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.040192 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.058620 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.071559 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004
c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.083407 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.092790 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.102755 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.110691 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.120376 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.137983 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.156032 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.168631 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.182472 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.194222 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.205457 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.219657 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.237325 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.254298 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc 
kubenswrapper[4681]: I0404 01:57:47.932823 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" event={"ID":"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c","Type":"ContainerStarted","Data":"c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3"} Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.935073 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/0.log" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.937359 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.938323 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a" exitCode=1 Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.938366 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a"} Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.939034 4681 scope.go:117] "RemoveContainer" containerID="af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.949938 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.964233 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.977670 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:47 crc kubenswrapper[4681]: I0404 01:57:47.991075 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"prox
y-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:47Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.005637 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.015746 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.025698 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe254
47fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.035962 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.048199 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.072004 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.089002 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.100207 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.113849 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.129821 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPa
th\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.141388 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.154029 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.163631 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc 
kubenswrapper[4681]: I0404 01:57:48.175109 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.184938 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.200256 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.200378 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:48 crc kubenswrapper[4681]: E0404 01:57:48.200427 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.200457 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.200486 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:48 crc kubenswrapper[4681]: E0404 01:57:48.200658 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:48 crc kubenswrapper[4681]: E0404 01:57:48.200744 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:48 crc kubenswrapper[4681]: E0404 01:57:48.200867 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.205053 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.218725 4681 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.231060 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.248856 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.266557 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.278249 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.304434 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"message\\\":\\\".658748 6611 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0404 01:57:47.658761 6611 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:57:47.658766 6611 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:57:47.658783 6611 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0404 01:57:47.658780 6611 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0404 01:57:47.658796 6611 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:57:47.658810 6611 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0404 01:57:47.658820 6611 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0404 01:57:47.658825 6611 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:57:47.658831 6611 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0404 01:57:47.658837 6611 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0404 01:57:47.658854 6611 handler.go:208] Removed *v1.Namespace 
event handler 5\\\\nI0404 01:57:47.658870 6611 factory.go:656] Stopping watch factory\\\\nI0404 01:57:47.658883 6611 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:57:47.658900 6611 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:57:47.658909 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.325554 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.336390 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.346170 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.355540 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.364931 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.378548 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.390589 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc 
kubenswrapper[4681]: I0404 01:57:48.403027 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.413581 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.425748 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.944991 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/1.log" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.947413 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/0.log" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.950685 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.951983 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67" exitCode=1 Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.952042 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67"} Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.952090 4681 scope.go:117] "RemoveContainer" containerID="af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.953074 4681 scope.go:117] "RemoveContainer" containerID="f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67" Apr 04 01:57:48 crc kubenswrapper[4681]: E0404 01:57:48.953259 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:57:48 crc 
kubenswrapper[4681]: I0404 01:57:48.977028 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:48 crc kubenswrapper[4681]: I0404 01:57:48.994927 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:48Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.010854 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.023489 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.037363 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.051952 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04
-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.063931 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.082423 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.096197 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.108818 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.132578 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"message\\\":\\\".658748 6611 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0404 01:57:47.658761 6611 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:57:47.658766 6611 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:57:47.658783 6611 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0404 01:57:47.658780 6611 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0404 01:57:47.658796 6611 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:57:47.658810 6611 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0404 01:57:47.658820 6611 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0404 01:57:47.658825 6611 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:57:47.658831 6611 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0404 01:57:47.658837 6611 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0404 01:57:47.658854 6611 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:57:47.658870 6611 factory.go:656] Stopping watch factory\\\\nI0404 01:57:47.658883 6611 
ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:57:47.658900 6611 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:57:47.658909 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.156972 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.172451 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.190056 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.205704 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.218222 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.230437 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.238901 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:49Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:49 crc 
kubenswrapper[4681]: I0404 01:57:49.958934 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/1.log" Apr 04 01:57:49 crc kubenswrapper[4681]: I0404 01:57:49.965173 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:57:50 crc kubenswrapper[4681]: I0404 01:57:50.200681 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:50 crc kubenswrapper[4681]: E0404 01:57:50.200832 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:50 crc kubenswrapper[4681]: I0404 01:57:50.201043 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:50 crc kubenswrapper[4681]: E0404 01:57:50.201091 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:50 crc kubenswrapper[4681]: I0404 01:57:50.201319 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:50 crc kubenswrapper[4681]: E0404 01:57:50.201376 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:50 crc kubenswrapper[4681]: I0404 01:57:50.201328 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:50 crc kubenswrapper[4681]: E0404 01:57:50.201692 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.214973 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.229001 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.242383 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.258159 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004
c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.276714 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.289132 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.309902 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.311805 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.321814 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{
\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.333968 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.356364 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af1d06a7c84f707fcf0f64b3333b066d8e168e416c32ed193a3089676d82a23a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"message\\\":\\\".658748 6611 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0404 01:57:47.658761 6611 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:57:47.658766 6611 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:57:47.658783 6611 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0404 01:57:47.658780 6611 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0404 01:57:47.658796 6611 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:57:47.658810 6611 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0404 01:57:47.658820 6611 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0404 01:57:47.658825 6611 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:57:47.658831 6611 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0404 01:57:47.658837 6611 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0404 01:57:47.658854 6611 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:57:47.658870 6611 factory.go:656] Stopping watch factory\\\\nI0404 01:57:47.658883 6611 
ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:57:47.658900 6611 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:57:47.658909 6611 metrics.go:553] Stopping metrics server at address \\\\\\\"\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.380976 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.393946 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.407988 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.422103 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.435323 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.455647 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.477404 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.489600 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc 
kubenswrapper[4681]: I0404 01:57:51.503550 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.503583 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.503594 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.503610 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.503618 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:51Z","lastTransitionTime":"2026-04-04T01:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.521403 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.525707 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.525741 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.525749 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.525762 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.525770 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:51Z","lastTransitionTime":"2026-04-04T01:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.538105 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.541746 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.541782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.541794 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.541816 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.541831 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:51Z","lastTransitionTime":"2026-04-04T01:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.558724 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.562563 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.562597 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.562606 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.562618 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.562630 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:51Z","lastTransitionTime":"2026-04-04T01:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.574703 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.578387 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.578426 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.578437 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.578454 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:57:51 crc kubenswrapper[4681]: I0404 01:57:51.578467 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:57:51Z","lastTransitionTime":"2026-04-04T01:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.595688 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:51Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:51 crc kubenswrapper[4681]: E0404 01:57:51.595851 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:57:52 crc kubenswrapper[4681]: I0404 01:57:52.200611 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:52 crc kubenswrapper[4681]: I0404 01:57:52.200704 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:52 crc kubenswrapper[4681]: E0404 01:57:52.200757 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:52 crc kubenswrapper[4681]: E0404 01:57:52.200904 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:52 crc kubenswrapper[4681]: I0404 01:57:52.200966 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:52 crc kubenswrapper[4681]: E0404 01:57:52.201017 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:52 crc kubenswrapper[4681]: I0404 01:57:52.201080 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:52 crc kubenswrapper[4681]: E0404 01:57:52.201185 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:54 crc kubenswrapper[4681]: I0404 01:57:54.200577 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:54 crc kubenswrapper[4681]: E0404 01:57:54.200946 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:54 crc kubenswrapper[4681]: I0404 01:57:54.200656 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:54 crc kubenswrapper[4681]: E0404 01:57:54.201022 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:54 crc kubenswrapper[4681]: I0404 01:57:54.200617 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:54 crc kubenswrapper[4681]: I0404 01:57:54.200732 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:54 crc kubenswrapper[4681]: E0404 01:57:54.201106 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:54 crc kubenswrapper[4681]: E0404 01:57:54.201214 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.200288 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.200339 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.200393 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.200425 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.200304 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.200571 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.200632 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.200706 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.311635 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.563108 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.570071 4681 scope.go:117] "RemoveContainer" containerID="f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67" Apr 04 01:57:56 crc kubenswrapper[4681]: E0404 01:57:56.570622 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.581931 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc 
kubenswrapper[4681]: I0404 01:57:56.594256 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.608779 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.624951 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.638114 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.650045 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.661971 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.671360 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.679241 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.688760 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.699738 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.714121 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.727497 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.740190 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.752705 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.774321 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.798435 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:56 crc kubenswrapper[4681]: I0404 01:57:56.812960 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:57:56Z is after 2025-08-24T17:21:41Z" Apr 04 01:57:58 crc kubenswrapper[4681]: I0404 01:57:58.199851 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:57:58 crc kubenswrapper[4681]: I0404 01:57:58.199851 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:57:58 crc kubenswrapper[4681]: I0404 01:57:58.199957 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:57:58 crc kubenswrapper[4681]: I0404 01:57:58.200030 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:57:58 crc kubenswrapper[4681]: E0404 01:57:58.200232 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:57:58 crc kubenswrapper[4681]: E0404 01:57:58.200334 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:57:58 crc kubenswrapper[4681]: E0404 01:57:58.200465 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:57:58 crc kubenswrapper[4681]: E0404 01:57:58.200598 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.136811 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.137233 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.137335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.137730 4681 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.137837 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:04.137810604 +0000 UTC m=+223.803585724 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.139246 4681 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.139392 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.139355692 +0000 UTC m=+223.805130812 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.139508 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.139416814 +0000 UTC m=+223.805191984 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.200416 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.200497 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.200531 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.200683 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.200701 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.200861 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.200940 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.201059 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.238826 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:00 crc kubenswrapper[4681]: I0404 01:58:00.238888 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239007 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239009 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239022 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239033 4681 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239035 4681 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239049 4681 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239089 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.239076859 +0000 UTC m=+223.904851979 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:58:00 crc kubenswrapper[4681]: E0404 01:58:00.239104 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.23909913 +0000 UTC m=+223.904874250 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.215338 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.231999 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.246666 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc 
kubenswrapper[4681]: I0404 01:58:01.260045 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.272977 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.288909 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.302849 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.312369 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.316781 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f1296
2a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.329649 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recur
siveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.341402 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.353542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.367084 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.382499 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55
:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.400888 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.414329 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.436061 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.446726 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.486318 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.809677 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.809790 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.809813 4681 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.809841 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.809862 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:01Z","lastTransitionTime":"2026-04-04T01:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.828504 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.836958 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.837046 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.837082 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.837115 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.837139 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:01Z","lastTransitionTime":"2026-04-04T01:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.856199 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.860782 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.860820 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.860833 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.860849 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.860861 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:01Z","lastTransitionTime":"2026-04-04T01:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.876304 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.880533 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.880559 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.880568 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.880581 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.880590 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:01Z","lastTransitionTime":"2026-04-04T01:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.898460 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.902818 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.902919 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.902930 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.902961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:01 crc kubenswrapper[4681]: I0404 01:58:01.902971 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:01Z","lastTransitionTime":"2026-04-04T01:58:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.927488 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:01Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:01 crc kubenswrapper[4681]: E0404 01:58:01.927643 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:58:02 crc kubenswrapper[4681]: I0404 01:58:02.200596 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:02 crc kubenswrapper[4681]: I0404 01:58:02.200669 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:02 crc kubenswrapper[4681]: I0404 01:58:02.200716 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:02 crc kubenswrapper[4681]: I0404 01:58:02.200774 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:02 crc kubenswrapper[4681]: E0404 01:58:02.200920 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:02 crc kubenswrapper[4681]: E0404 01:58:02.201095 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:02 crc kubenswrapper[4681]: E0404 01:58:02.201158 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:02 crc kubenswrapper[4681]: E0404 01:58:02.201226 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:04 crc kubenswrapper[4681]: I0404 01:58:04.200546 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:04 crc kubenswrapper[4681]: I0404 01:58:04.200586 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:04 crc kubenswrapper[4681]: I0404 01:58:04.200618 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:04 crc kubenswrapper[4681]: I0404 01:58:04.200546 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:04 crc kubenswrapper[4681]: E0404 01:58:04.200737 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:04 crc kubenswrapper[4681]: E0404 01:58:04.200838 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:04 crc kubenswrapper[4681]: E0404 01:58:04.201000 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:04 crc kubenswrapper[4681]: E0404 01:58:04.201155 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:06 crc kubenswrapper[4681]: I0404 01:58:06.200469 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:06 crc kubenswrapper[4681]: I0404 01:58:06.200537 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:06 crc kubenswrapper[4681]: I0404 01:58:06.200573 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:06 crc kubenswrapper[4681]: E0404 01:58:06.200710 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:06 crc kubenswrapper[4681]: I0404 01:58:06.200747 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:06 crc kubenswrapper[4681]: E0404 01:58:06.200913 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:06 crc kubenswrapper[4681]: E0404 01:58:06.201069 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:06 crc kubenswrapper[4681]: E0404 01:58:06.201155 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:06 crc kubenswrapper[4681]: E0404 01:58:06.313681 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:58:08 crc kubenswrapper[4681]: I0404 01:58:08.200536 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:08 crc kubenswrapper[4681]: I0404 01:58:08.200645 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:08 crc kubenswrapper[4681]: E0404 01:58:08.201590 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:08 crc kubenswrapper[4681]: I0404 01:58:08.200756 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:08 crc kubenswrapper[4681]: E0404 01:58:08.201683 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:08 crc kubenswrapper[4681]: E0404 01:58:08.201740 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:08 crc kubenswrapper[4681]: I0404 01:58:08.200681 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:08 crc kubenswrapper[4681]: E0404 01:58:08.201940 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.200117 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.200192 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.200144 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.200114 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.200320 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.200557 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.200713 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.201103 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.217149 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Apr 04 01:58:10 crc kubenswrapper[4681]: I0404 01:58:10.449727 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.449934 4681 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:58:10 crc kubenswrapper[4681]: E0404 01:58:10.450032 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs podName:41bdd8e6-130d-4e3e-b466-313031c233d1 nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.45001284 +0000 UTC m=+234.115787960 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs") pod "network-metrics-daemon-jk6f6" (UID: "41bdd8e6-130d-4e3e-b466-313031c233d1") : object "openshift-multus"/"metrics-daemon-secret" not registered Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.202307 4681 scope.go:117] "RemoveContainer" containerID="f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.220620 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.240655 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.260951 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.274500 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.290354 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.314787 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04
-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: E0404 01:58:11.315005 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.332047 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\
"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.351726 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.370667 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.393460 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.426305 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.444923 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.479413 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.503114 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.524951 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.542989 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.559773 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.577640 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:11 crc kubenswrapper[4681]: I0404 01:58:11.592330 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:11Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc 
kubenswrapper[4681]: I0404 01:58:12.051775 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/1.log" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.054783 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.054811 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.054823 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.054839 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.054850 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:12Z","lastTransitionTime":"2026-04-04T01:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.055797 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.056800 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9"} Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.073773 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:12Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.079490 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.079547 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.079567 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.079590 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.079607 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:12Z","lastTransitionTime":"2026-04-04T01:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.098504 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:12Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.102802 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.102856 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.102874 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.102900 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.102917 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:12Z","lastTransitionTime":"2026-04-04T01:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.124866 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:12Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.128885 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.128933 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.128944 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.128961 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.128972 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:12Z","lastTransitionTime":"2026-04-04T01:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.144018 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:12Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.149777 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.149832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.149850 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.149873 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.149891 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:12Z","lastTransitionTime":"2026-04-04T01:58:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.171040 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:12Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.171309 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.200437 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.200488 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.200514 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.200629 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:12 crc kubenswrapper[4681]: I0404 01:58:12.200909 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.201041 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.201440 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:12 crc kubenswrapper[4681]: E0404 01:58:12.201690 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.061702 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.084554 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z
\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.105858 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.125484 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.139555 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.154540 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.171044 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.189002 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.225587 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.249911 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.271876 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.291720 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.322684 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\"
,\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.342209 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.362159 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.387567 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.404248 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc 
kubenswrapper[4681]: I0404 01:58:13.423438 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.443806 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:13 crc kubenswrapper[4681]: I0404 01:58:13.464147 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:13Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.067014 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/2.log" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.068012 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/1.log" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.072341 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.073370 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" exitCode=1 Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.073425 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9"} Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.073483 4681 scope.go:117] "RemoveContainer" containerID="f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.074525 4681 scope.go:117] "RemoveContainer" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 01:58:14 crc kubenswrapper[4681]: E0404 01:58:14.074889 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:58:14 crc 
kubenswrapper[4681]: I0404 01:58:14.102204 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.125420 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.140322 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc 
kubenswrapper[4681]: I0404 01:58:14.154644 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.169902 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.189828 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.200345 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.200409 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:14 crc kubenswrapper[4681]: E0404 01:58:14.200542 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.200567 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.200610 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:14 crc kubenswrapper[4681]: E0404 01:58:14.200784 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:14 crc kubenswrapper[4681]: E0404 01:58:14.200954 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:14 crc kubenswrapper[4681]: E0404 01:58:14.201080 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.215132 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\
\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.230865 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.248032 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.262845 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.285390 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.301252 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.316755 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.349193 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.375705 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.397254 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.418390 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.472503 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2291b9a7723ee2221806bf4fceca4535aad0031deddd6cd7bb26cf4feb2ff67\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:48Z\\\",\\\"message\\\":\\\"ting logical port openshift-network-console_networking-console-plugin-85b44fc459-gdk6g for pod on switch crc\\\\nI0404 01:57:48.680011 6844 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0404 01:57:48.680300 6844 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nI0404 01:57:48.679966 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc after 0 failed attempt(s)\\\\nI0404 01:57:48.680326 6844 default_network_controller.go:776] Recording success event on pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI0404 01:57:48.679953 6844 obj_retry.go:386] Retry successful for *v1.Pod openshift-image-registry/node-ca-8jsq4 after 0 failed attempt(s)\\\\nI0404 01:57:48.680341 6844 default_network_controller.go:776] Recording success event on pod openshift-image-registry/node-ca-8jsq4\\\\nF0404 01:57:48.680200 6844 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could 
not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 
ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6df
e0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:14 crc kubenswrapper[4681]: I0404 01:58:14.490107 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:14Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.080599 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/2.log" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.087182 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.089584 4681 scope.go:117] "RemoveContainer" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 01:58:15 crc kubenswrapper[4681]: E0404 01:58:15.089840 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.111083 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\
\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.131143 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.154432 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.170560 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc 
kubenswrapper[4681]: I0404 01:58:15.193427 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.214200 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.235483 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.259872 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.278102 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.297671 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.313163 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.328231 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.345323 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.360404 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.393884 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.419811 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.440985 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.460143 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:15 crc kubenswrapper[4681]: I0404 01:58:15.479707 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd96
9aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:15Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:16 crc kubenswrapper[4681]: I0404 01:58:16.200819 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:16 crc kubenswrapper[4681]: I0404 01:58:16.200854 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:16 crc kubenswrapper[4681]: I0404 01:58:16.200872 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:16 crc kubenswrapper[4681]: I0404 01:58:16.200838 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:16 crc kubenswrapper[4681]: E0404 01:58:16.200945 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:16 crc kubenswrapper[4681]: E0404 01:58:16.201067 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:16 crc kubenswrapper[4681]: E0404 01:58:16.201093 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:16 crc kubenswrapper[4681]: E0404 01:58:16.201154 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:16 crc kubenswrapper[4681]: E0404 01:58:16.315915 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:58:18 crc kubenswrapper[4681]: I0404 01:58:18.199811 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:18 crc kubenswrapper[4681]: I0404 01:58:18.199842 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:18 crc kubenswrapper[4681]: I0404 01:58:18.199961 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:18 crc kubenswrapper[4681]: I0404 01:58:18.200473 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:18 crc kubenswrapper[4681]: E0404 01:58:18.200600 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:18 crc kubenswrapper[4681]: E0404 01:58:18.200851 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:18 crc kubenswrapper[4681]: E0404 01:58:18.201004 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:18 crc kubenswrapper[4681]: E0404 01:58:18.201238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:20 crc kubenswrapper[4681]: I0404 01:58:20.200770 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:20 crc kubenswrapper[4681]: I0404 01:58:20.200905 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:20 crc kubenswrapper[4681]: E0404 01:58:20.201375 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:20 crc kubenswrapper[4681]: I0404 01:58:20.200934 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:20 crc kubenswrapper[4681]: I0404 01:58:20.200965 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:20 crc kubenswrapper[4681]: E0404 01:58:20.201893 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:20 crc kubenswrapper[4681]: E0404 01:58:20.202071 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:20 crc kubenswrapper[4681]: E0404 01:58:20.202130 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.230686 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546
205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.249354 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.265950 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: E0404 01:58:21.330689 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.332785 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.356169 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f
922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.373638 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.395600 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.417869 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.438769 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.460083 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.483605 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd96
9aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.494306 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.508033 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.529785 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.543668 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc 
kubenswrapper[4681]: I0404 01:58:21.560750 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.573483 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.586323 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:21 crc kubenswrapper[4681]: I0404 01:58:21.599546 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:21Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.200836 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.200855 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.200930 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.200994 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.201125 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.201361 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.201658 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.201803 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.550852 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.551996 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.552199 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.552427 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.552493 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:22Z","lastTransitionTime":"2026-04-04T01:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.572323 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:22Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.577931 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.577988 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.578007 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.578030 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.578047 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:22Z","lastTransitionTime":"2026-04-04T01:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.599576 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:22Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.604735 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.604792 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.604810 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.604832 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.604849 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:22Z","lastTransitionTime":"2026-04-04T01:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.623980 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:22Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.630699 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.630773 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.630794 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.630821 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.630850 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:22Z","lastTransitionTime":"2026-04-04T01:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.659809 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:22Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.665373 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.665433 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.665457 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.665489 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:22 crc kubenswrapper[4681]: I0404 01:58:22.665513 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:22Z","lastTransitionTime":"2026-04-04T01:58:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.685620 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:22Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:22 crc kubenswrapper[4681]: E0404 01:58:22.685835 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:58:24 crc kubenswrapper[4681]: I0404 01:58:24.200311 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:24 crc kubenswrapper[4681]: I0404 01:58:24.200381 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:24 crc kubenswrapper[4681]: I0404 01:58:24.200422 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:24 crc kubenswrapper[4681]: E0404 01:58:24.200562 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:24 crc kubenswrapper[4681]: E0404 01:58:24.200724 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:24 crc kubenswrapper[4681]: E0404 01:58:24.200960 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:24 crc kubenswrapper[4681]: I0404 01:58:24.202421 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:24 crc kubenswrapper[4681]: E0404 01:58:24.202570 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.129609 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5wbs_cab7ffc5-0101-48b8-87ab-de8324bacc38/kube-multus/0.log" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.129664 4681 generic.go:334] "Generic (PLEG): container finished" podID="cab7ffc5-0101-48b8-87ab-de8324bacc38" containerID="78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb" exitCode=1 Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.129702 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerDied","Data":"78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb"} Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.130188 4681 scope.go:117] "RemoveContainer" containerID="78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.155566 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.173933 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.189074 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc 
kubenswrapper[4681]: I0404 01:58:26.200619 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.200791 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.201019 4681 scope.go:117] "RemoveContainer" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.201033 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.201325 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.201390 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.201435 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.201501 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.201567 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.201630 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.206248 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.223752 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.256989 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:26Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:25Z\\\",\\\"message\\\":\\\"2026-04-04T01:57:39+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051\\\\n2026-04-04T01:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051 to /host/opt/cni/bin/\\\\n2026-04-04T01:57:40Z [verbose] multus-daemon started\\\\n2026-04-04T01:57:40Z [verbose] Readiness Indicator file check\\\\n2026-04-04T01:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.278701 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.295701 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.312254 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.328237 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: E0404 01:58:26.332037 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.347289 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.371203 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f
922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.390216 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"l
astState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.422506 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.436676 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.457293 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.476765 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.500059 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd96
9aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:26 crc kubenswrapper[4681]: I0404 01:58:26.517542 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:26Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.137521 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5wbs_cab7ffc5-0101-48b8-87ab-de8324bacc38/kube-multus/0.log" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.137597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerStarted","Data":"7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00"} Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.165876 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.184175 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\",\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" 
name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.201016 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.221879 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.255065 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd96
9aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.271444 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.289956 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.312648 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.327563 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc 
kubenswrapper[4681]: I0404 01:58:27.344957 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.362469 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.383133 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:25Z\\\",\\\"message\\\":\\\"2026-04-04T01:57:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051\\\\n2026-04-04T01:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051 to /host/opt/cni/bin/\\\\n2026-04-04T01:57:40Z [verbose] multus-daemon started\\\\n2026-04-04T01:57:40Z [verbose] 
Readiness Indicator file check\\\\n2026-04-04T01:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.404733 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.420898 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.440112 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.457788 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.474074 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.493758 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:27 crc kubenswrapper[4681]: I0404 01:58:27.514863 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:27Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:28 crc kubenswrapper[4681]: I0404 01:58:28.200910 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:28 crc kubenswrapper[4681]: I0404 01:58:28.200918 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:28 crc kubenswrapper[4681]: E0404 01:58:28.201461 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:28 crc kubenswrapper[4681]: I0404 01:58:28.200959 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:28 crc kubenswrapper[4681]: I0404 01:58:28.200943 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:28 crc kubenswrapper[4681]: E0404 01:58:28.201577 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:28 crc kubenswrapper[4681]: E0404 01:58:28.201661 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:28 crc kubenswrapper[4681]: E0404 01:58:28.201803 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:30 crc kubenswrapper[4681]: I0404 01:58:30.200103 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:30 crc kubenswrapper[4681]: I0404 01:58:30.200129 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:30 crc kubenswrapper[4681]: I0404 01:58:30.200132 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:30 crc kubenswrapper[4681]: I0404 01:58:30.200325 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:30 crc kubenswrapper[4681]: E0404 01:58:30.200435 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:30 crc kubenswrapper[4681]: E0404 01:58:30.200579 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:30 crc kubenswrapper[4681]: E0404 01:58:30.200696 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:30 crc kubenswrapper[4681]: E0404 01:58:30.200820 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.226983 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a4b857bb-f1f7-4278-b5f5-c2c92b7b5fa7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://654d585d9608ff1ab095ebc1e08e2404b59377d9483553ccc21b842d388d9566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e83f6269eab0f2cfa35b94dd5dc3749c2dcd982ccba81362084214c0aa8aa370\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7aa53b3cbd3084d0ca4316e337a32f7789c07dec7060cc8a3173ff24e16bf01c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a2d1ac9d343b3d6042021945ccc09ae1045c739b04c444c2f308a030410b9d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8
ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2e939cc0269828b2016e5fd8fc9f75f02b8e583e2e2284f2b2a94f1e7946fe6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314
731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://817e1d4ccd73ef503a5afc9d6ef1d1cec09a278bd4c7f0dcce9091f56e746ebc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://beba6115ee13b455b267dc0a532eda9a3471ae4d7fce34ee21e124158e34a250\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec6919fa25dd7f07ea1dc96e031210f277789539cbef175993ef463a8bffba57\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-0
4T01:55:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.247069 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f5d976-9467-446a-83cc-8b487a024874\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:56:23Z\\\"
,\\\"message\\\":\\\"ver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0404 01:56:23.616131 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616145 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0404 01:56:23.616156 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0404 01:56:23.616165 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0404 01:56:23.616172 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0404 01:56:23.616180 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0404 01:56:23.620404 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620467 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0404 01:56:23.620525 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620559 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0404 01:56:23.620630 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0404 01:56:23.620674 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0404 01:56:23.620730 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0404 01:56:23.620760 1 envvar.go:172] \\\\\\\"Feature gate default 
state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nF0404 01:56:23.620763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:56:22Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:24Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://abc470a9ecb5ff4e
2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.262019 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.278334 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.310345 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d004639b-c07a-4401-8588-8af4ed981db3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"message\\\":\\\"++ K8S_NODE=\\\\n++ [[ -n '' ]]\\\\n++ northd_pidfile=/var/run/ovn/ovn-northd.pid\\\\n++ controller_pidfile=/var/run/ovn/ovn-controller.pid\\\\n++ controller_logfile=/var/log/ovn/acl-audit-log.log\\\\n++ vswitch_dbsock=/var/run/openvswitch/db.sock\\\\n++ nbdb_pidfile=/var/run/ovn/ovnnb_db.pid\\\\n++ nbdb_sock=/var/run/ovn/ovnnb_db.sock\\\\n++ nbdb_ctl=/var/run/ovn/ovnnb_db.ctl\\\\n++ sbdb_pidfile=/var/run/ovn/ovnsb_db.pid\\\\n++ sbdb_sock=/var/run/ovn/ovnsb_db.sock\\\\n++ sbdb_ctl=/var/run/ovn/ovnsb_db.ctl\\\\n+ start-audit-log-rotation\\\\n+ MAXFILESIZE=50000000\\\\n+ MAXLOGFILES=5\\\\n++ dirname /var/log/ovn/acl-audit-log.log\\\\n+ LOGDIR=/var/log/ovn\\\\n+ local retries=0\\\\n+ [[ 30 -gt 0 ]]\\\\n+ (( retries += 1 ))\\\\n++ cat /var/run/ovn/ovn-controller.pid\\\\ncat: /var/run/ovn/ovn-controller.pid: No such file or 
directory\\\\n+ CONTROLLERPID=\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":
\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:13Z\\\",\\\"message\\\":\\\"from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871234 7111 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0404 01:58:12.871279 7111 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0404 01:58:12.871800 7111 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0404 01:58:12.871843 7111 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0404 01:58:12.871925 7111 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0404 01:58:12.871974 7111 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0404 01:58:12.872038 7111 factory.go:656] Stopping watch factory\\\\nI0404 01:58:12.872056 7111 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0404 01:58:12.872070 7111 ovnkube.go:599] Stopped ovnkube\\\\nI0404 01:58:12.872073 7111 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0404 01:58:12.872036 7111 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0404 01:58:12.872136 7111 
handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0404 01:58:12.872152 7111 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0404 01:58:12.872242 7111 ovnkube.go:\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:58:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-cntwc_openshift-ovn-kubernetes(d004639b-c07a-4401-8588-8af4ed981db3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{
\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd96
9aa1f6dfe0f6a8ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:36Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vz8jr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-cntwc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.328835 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9974bfdb-b2d7-4868-b6aa-2e224af81ae5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b4cec6cb4073050c86228bfabf2ec38203bb000808aa283fbab1f7af8756c36\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d279e74261f5a470aec96a2cb3111a60135cc342c6d922552da3b315ab10a134\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: E0404 01:58:31.333012 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.352008 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://54f5638999141b3208954eedcc112ee8e065ed5869d04143001b1fd287c18dd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea4da0687831f8abdc55273f0552c918954d2c058a4356b9c5359402083bdb6a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.379165 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5918fa67-6cfa-4c3b-bc04-7cc7888abf1c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c03cfac4cd4e9c86986c359493cc6c5551cc537bf55e0a9fb52f9f5c80bb94a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12529585b1c610b9bbe2259494da79ca32c98c2267bb868289b005fcb2eca811\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4138a3bedbb3ba06a124d4a9edd9db72048012cae031e7071b375846ed4ae2e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:39Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7720b0315fbaf6b22c5fbd8480964da0cd980aed0ffd4c127cad9c40354021b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://99830
ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://99830ae80ce3ff6903cad331481eb07f89e25655f97a1e20c89adbde26f68860\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50ba29867f52d171a45997b3cb43d4007935daee10e58862f349c0b5884c2735\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:45Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://77e7a2b06d5ebbddb6cb463ee67c9d0322a52e412511a21297e45fec47d92ee6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:57:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q824j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bqtgx\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.396633 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"41bdd8e6-130d-4e3e-b466-313031c233d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-frbh6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:06Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jk6f6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc 
kubenswrapper[4681]: I0404 01:58:31.414780 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"21fa3376-6f50-4819-b268-579c7500c923\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-04-04T01:55:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.428635 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.447811 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-w5wbs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cab7ffc5-0101-48b8-87ab-de8324bacc38\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-04-04T01:58:25Z\\\",\\\"message\\\":\\\"2026-04-04T01:57:39+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051\\\\n2026-04-04T01:57:39+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_a314ff8b-9d12-4778-aeb9-f3940fb56051 to /host/opt/cni/bin/\\\\n2026-04-04T01:57:40Z [verbose] multus-daemon started\\\\n2026-04-04T01:57:40Z [verbose] 
Readiness Indicator file check\\\\n2026-04-04T01:58:25Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:57:38Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:58:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wqjdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-multus\"/\"multus-w5wbs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.467745 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.480189 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d457ca0b-43c6-4bab-940c-5aa4ab124992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0266f6c217cfb69ed899bb926acc9f1c43b4429ff30b6fb54db89b92adffb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nt54j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-v6mjr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.496451 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7e91df578536e121b6d93c3a410d1d8e1bc65e9a3ea5c409bcbdd33308a4cde3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.510336 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-jsn9l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e4e1568b-1dc4-41c2-a74f-38bfabcf1280\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7fcdf2be16bb9aeb8d28d6805badefaab604b3ba020eb5d101baaeebe2fb7d2f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-n97gc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:56Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-jsn9l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.523747 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-8jsq4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3cf3df95-c4d1-4983-81f0-c7ec3b2b5ad1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d5a80321a01a279ac896396acffe25447fc28d3250dba73c82999
553f39554d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sxjst\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:56:59Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-8jsq4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.540168 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4fe566-3f65-4de4-9595-80b23fe4149c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:57:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91aa9f3be56d722e328511c8192ffb904737386d2bd59f922f20af0177098fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b49348004c8bf7a0559c12eb6d66e1bcecee0
427f1239e092e470f2a5e127b9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:57:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kvgg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:57:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-sswhf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:31 crc kubenswrapper[4681]: I0404 01:58:31.557675 4681 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:56:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-04-04T01:55:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-04-04T01:55:51Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0404 01:55:23.581321 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0404 01:55:23.584508 1 observer_polling.go:159] Starting file observer\\\\nI0404 01:55:23.630158 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0404 01:55:23.635460 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0404 01:55:51.649805 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0404 01:55:51.649953 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:55:51Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-04-04T01:55:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-04-04T01:55:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:31Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.200037 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.200185 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.200376 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.200512 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.200566 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.200732 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.200841 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.200983 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.738193 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.738314 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.738352 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.738382 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.738404 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:32Z","lastTransitionTime":"2026-04-04T01:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.760289 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:32Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.765653 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.765706 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.765725 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.765749 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.765764 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:32Z","lastTransitionTime":"2026-04-04T01:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.786448 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:32Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.790945 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.791023 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.791062 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.791097 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.791128 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:32Z","lastTransitionTime":"2026-04-04T01:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.806585 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:32Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.812174 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.812238 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.812251 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.812290 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.812304 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:32Z","lastTransitionTime":"2026-04-04T01:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.830304 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:32Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.835937 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.836054 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.836071 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.836094 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:32 crc kubenswrapper[4681]: I0404 01:58:32.836112 4681 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:32Z","lastTransitionTime":"2026-04-04T01:58:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.854523 4681 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"lastTransitionTime\\\":\\\"2026-04-04T01:58:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"49e1a4fd-53f8-4e31-ae85-567f85d79a05\\\",\\\"systemUUID\\\":\\\"4891c636-dd7d-42bd-b5a2-f8934586e626\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-04-04T01:58:32Z is after 2025-08-24T17:21:41Z" Apr 04 01:58:32 crc kubenswrapper[4681]: E0404 01:58:32.854826 4681 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Apr 04 01:58:34 crc kubenswrapper[4681]: I0404 01:58:34.200021 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:34 crc kubenswrapper[4681]: I0404 01:58:34.200068 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:34 crc kubenswrapper[4681]: I0404 01:58:34.200218 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:34 crc kubenswrapper[4681]: E0404 01:58:34.200204 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:34 crc kubenswrapper[4681]: I0404 01:58:34.200306 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:34 crc kubenswrapper[4681]: E0404 01:58:34.200447 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:34 crc kubenswrapper[4681]: E0404 01:58:34.200729 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:34 crc kubenswrapper[4681]: E0404 01:58:34.200906 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:36 crc kubenswrapper[4681]: I0404 01:58:36.200602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:36 crc kubenswrapper[4681]: I0404 01:58:36.200673 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:36 crc kubenswrapper[4681]: E0404 01:58:36.200913 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:36 crc kubenswrapper[4681]: I0404 01:58:36.201045 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:36 crc kubenswrapper[4681]: I0404 01:58:36.201144 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:36 crc kubenswrapper[4681]: E0404 01:58:36.201334 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:36 crc kubenswrapper[4681]: E0404 01:58:36.201594 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:36 crc kubenswrapper[4681]: E0404 01:58:36.201842 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:36 crc kubenswrapper[4681]: E0404 01:58:36.334695 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Apr 04 01:58:38 crc kubenswrapper[4681]: I0404 01:58:38.200776 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:38 crc kubenswrapper[4681]: I0404 01:58:38.200817 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:38 crc kubenswrapper[4681]: I0404 01:58:38.200868 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:38 crc kubenswrapper[4681]: I0404 01:58:38.200789 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:38 crc kubenswrapper[4681]: E0404 01:58:38.200942 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:38 crc kubenswrapper[4681]: E0404 01:58:38.201019 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:38 crc kubenswrapper[4681]: E0404 01:58:38.201735 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:38 crc kubenswrapper[4681]: E0404 01:58:38.201908 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:38 crc kubenswrapper[4681]: I0404 01:58:38.202203 4681 scope.go:117] "RemoveContainer" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.189974 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/2.log" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.194520 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.196012 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerStarted","Data":"7eb98cfe208dd9452114fa20dd0da34ba28dc788cd640b4539e2b080908b2f51"} Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.200479 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.200507 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.200533 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.200728 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:40 crc kubenswrapper[4681]: E0404 01:58:40.200795 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:40 crc kubenswrapper[4681]: E0404 01:58:40.200882 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:40 crc kubenswrapper[4681]: E0404 01:58:40.201049 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:40 crc kubenswrapper[4681]: E0404 01:58:40.201141 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:40 crc kubenswrapper[4681]: I0404 01:58:40.965515 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jk6f6"] Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.200722 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:41 crc kubenswrapper[4681]: E0404 01:58:41.200891 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.291681 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-w5wbs" podStartSLOduration=145.29165119 podStartE2EDuration="2m25.29165119s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.2710092 +0000 UTC m=+200.936784320" watchObservedRunningTime="2026-04-04 01:58:41.29165119 +0000 UTC m=+200.957426350" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.314553 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-sswhf" podStartSLOduration=145.314531104 podStartE2EDuration="2m25.314531104s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.292595737 +0000 UTC m=+200.958370847" 
watchObservedRunningTime="2026-04-04 01:58:41.314531104 +0000 UTC m=+200.980306234" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.328916 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podStartSLOduration=145.328891998 podStartE2EDuration="2m25.328891998s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.328361934 +0000 UTC m=+200.994137084" watchObservedRunningTime="2026-04-04 01:58:41.328891998 +0000 UTC m=+200.994667138" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.329142 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=57.329133756 podStartE2EDuration="57.329133756s" podCreationTimestamp="2026-04-04 01:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.314211375 +0000 UTC m=+200.979986525" watchObservedRunningTime="2026-04-04 01:58:41.329133756 +0000 UTC m=+200.994908886" Apr 04 01:58:41 crc kubenswrapper[4681]: E0404 01:58:41.335546 4681 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.375702 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-jsn9l" podStartSLOduration=146.375683896 podStartE2EDuration="2m26.375683896s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.365448547 +0000 UTC m=+201.031223687" watchObservedRunningTime="2026-04-04 01:58:41.375683896 +0000 UTC m=+201.041459016" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.376070 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-8jsq4" podStartSLOduration=146.376063866 podStartE2EDuration="2m26.376063866s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.375385177 +0000 UTC m=+201.041160307" watchObservedRunningTime="2026-04-04 01:58:41.376063866 +0000 UTC m=+201.041838986" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.413314 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=31.413295475 podStartE2EDuration="31.413295475s" podCreationTimestamp="2026-04-04 01:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.412999156 +0000 UTC m=+201.078774276" watchObservedRunningTime="2026-04-04 01:58:41.413295475 +0000 UTC m=+201.079070595" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.440332 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=93.440315835 podStartE2EDuration="1m33.440315835s" 
podCreationTimestamp="2026-04-04 01:57:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.439456591 +0000 UTC m=+201.105231721" watchObservedRunningTime="2026-04-04 01:58:41.440315835 +0000 UTC m=+201.106090955" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.459794 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=99.459768422 podStartE2EDuration="1m39.459768422s" podCreationTimestamp="2026-04-04 01:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.459363611 +0000 UTC m=+201.125138731" watchObservedRunningTime="2026-04-04 01:58:41.459768422 +0000 UTC m=+201.125543542" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.501140 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.501119997 podStartE2EDuration="55.501119997s" podCreationTimestamp="2026-04-04 01:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.500072338 +0000 UTC m=+201.165847478" watchObservedRunningTime="2026-04-04 01:58:41.501119997 +0000 UTC m=+201.166895117" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.529848 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bqtgx" podStartSLOduration=145.529833305 podStartE2EDuration="2m25.529833305s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.52929969 +0000 UTC m=+201.195074810" 
watchObservedRunningTime="2026-04-04 01:58:41.529833305 +0000 UTC m=+201.195608425" Apr 04 01:58:41 crc kubenswrapper[4681]: I0404 01:58:41.562701 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podStartSLOduration=145.562679299 podStartE2EDuration="2m25.562679299s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:41.561600789 +0000 UTC m=+201.227375919" watchObservedRunningTime="2026-04-04 01:58:41.562679299 +0000 UTC m=+201.228454429" Apr 04 01:58:42 crc kubenswrapper[4681]: I0404 01:58:42.200411 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:42 crc kubenswrapper[4681]: I0404 01:58:42.200455 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:42 crc kubenswrapper[4681]: I0404 01:58:42.200526 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:42 crc kubenswrapper[4681]: E0404 01:58:42.200611 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:42 crc kubenswrapper[4681]: E0404 01:58:42.200772 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:42 crc kubenswrapper[4681]: E0404 01:58:42.200947 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.151615 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.152041 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.152083 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.152155 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.152182 4681 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-04T01:58:43Z","lastTransitionTime":"2026-04-04T01:58:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.198973 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck"] Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.199606 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.199803 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:43 crc kubenswrapper[4681]: E0404 01:58:43.199915 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.202236 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.202381 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.202539 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.202880 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.239662 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.243030 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.243072 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2516f347-9e87-4da5-904f-c9c2c73f8ce3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.243104 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2516f347-9e87-4da5-904f-c9c2c73f8ce3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.243144 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2516f347-9e87-4da5-904f-c9c2c73f8ce3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.243179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.246325 4681 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344555 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2516f347-9e87-4da5-904f-c9c2c73f8ce3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344601 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2516f347-9e87-4da5-904f-c9c2c73f8ce3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344631 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2516f347-9e87-4da5-904f-c9c2c73f8ce3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344668 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344724 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.344786 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2516f347-9e87-4da5-904f-c9c2c73f8ce3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.353005 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2516f347-9e87-4da5-904f-c9c2c73f8ce3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.354086 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2516f347-9e87-4da5-904f-c9c2c73f8ce3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.379840 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2516f347-9e87-4da5-904f-c9c2c73f8ce3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-b88ck\" (UID: \"2516f347-9e87-4da5-904f-c9c2c73f8ce3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: I0404 01:58:43.516238 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" Apr 04 01:58:43 crc kubenswrapper[4681]: W0404 01:58:43.540142 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2516f347_9e87_4da5_904f_c9c2c73f8ce3.slice/crio-48f196a9184744ad838dbe0af6e13befc1e96e84a17cec3defcb97d3a9c47564 WatchSource:0}: Error finding container 48f196a9184744ad838dbe0af6e13befc1e96e84a17cec3defcb97d3a9c47564: Status 404 returned error can't find the container with id 48f196a9184744ad838dbe0af6e13befc1e96e84a17cec3defcb97d3a9c47564 Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.200583 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.200640 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.200661 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:44 crc kubenswrapper[4681]: E0404 01:58:44.201943 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:44 crc kubenswrapper[4681]: E0404 01:58:44.202004 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:44 crc kubenswrapper[4681]: E0404 01:58:44.202301 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.209796 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" event={"ID":"2516f347-9e87-4da5-904f-c9c2c73f8ce3","Type":"ContainerStarted","Data":"396138f3f2ba166c4320fa5af9ca21fb3d22ee2d62fd8e473b038703149ce11b"} Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.209939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" event={"ID":"2516f347-9e87-4da5-904f-c9c2c73f8ce3","Type":"ContainerStarted","Data":"48f196a9184744ad838dbe0af6e13befc1e96e84a17cec3defcb97d3a9c47564"} Apr 04 01:58:44 crc kubenswrapper[4681]: I0404 01:58:44.224322 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-b88ck" 
podStartSLOduration=149.224306388 podStartE2EDuration="2m29.224306388s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:44.223385773 +0000 UTC m=+203.889160893" watchObservedRunningTime="2026-04-04 01:58:44.224306388 +0000 UTC m=+203.890081508" Apr 04 01:58:45 crc kubenswrapper[4681]: I0404 01:58:45.200115 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:45 crc kubenswrapper[4681]: E0404 01:58:45.200776 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jk6f6" podUID="41bdd8e6-130d-4e3e-b466-313031c233d1" Apr 04 01:58:46 crc kubenswrapper[4681]: I0404 01:58:46.200320 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:46 crc kubenswrapper[4681]: I0404 01:58:46.200707 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:46 crc kubenswrapper[4681]: I0404 01:58:46.200413 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:46 crc kubenswrapper[4681]: E0404 01:58:46.201056 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Apr 04 01:58:46 crc kubenswrapper[4681]: E0404 01:58:46.201319 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Apr 04 01:58:46 crc kubenswrapper[4681]: E0404 01:58:46.201392 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Apr 04 01:58:47 crc kubenswrapper[4681]: I0404 01:58:47.200042 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:58:47 crc kubenswrapper[4681]: I0404 01:58:47.202276 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Apr 04 01:58:47 crc kubenswrapper[4681]: I0404 01:58:47.202344 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.200313 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.200340 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.200484 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.202423 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.202423 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.203008 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Apr 04 01:58:48 crc kubenswrapper[4681]: I0404 01:58:48.203278 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.352984 4681 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.396131 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.396668 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.398937 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.399098 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.399194 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.399443 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.399485 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.399932 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.400057 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.402009 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.403016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.403296 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.404669 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mftw8"] Apr 04 
01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.405126 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.406595 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.406986 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.408905 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-blqhv"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.409392 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.414818 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415226 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415243 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415291 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415489 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Apr 
04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415564 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415611 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415631 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415640 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415737 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415801 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.415935 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.416296 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.416400 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.416825 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.418307 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.420541 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.420813 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.421004 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.430906 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.430926 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.431997 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432058 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432097 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.431995 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432230 
4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432313 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432192 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432577 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.432813 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.433244 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.433833 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d8sjd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.434347 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.434889 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.435344 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.436810 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fn5hz"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.437364 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.437716 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tq7nn"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.441853 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.441876 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.441935 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442182 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442324 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442654 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442950 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442218 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.442968 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.445814 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.446014 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.446247 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.446459 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.446580 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.447731 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.447863 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448019 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448122 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448213 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448317 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448502 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448601 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.448675 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.449145 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458839 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458883 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458905 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-serving-cert\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458926 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458942 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-policies\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q74m2\" (UniqueName: \"kubernetes.io/projected/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-kube-api-access-q74m2\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458978 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.458999 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459021 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8bqc\" (UniqueName: \"kubernetes.io/projected/15b64868-afa1-4d70-bfda-799ed31decdb-kube-api-access-n8bqc\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: 
I0404 01:58:53.459039 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459059 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktq45\" (UniqueName: \"kubernetes.io/projected/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-kube-api-access-ktq45\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459085 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-encryption-config\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459121 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8ng6\" (UniqueName: \"kubernetes.io/projected/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-kube-api-access-c8ng6\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459162 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459193 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhwv\" (UniqueName: \"kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459215 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459239 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-config\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459276 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-config\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-serving-cert\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459362 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-client\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459381 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets\") pod 
\"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459404 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459424 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459462 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/15b64868-afa1-4d70-bfda-799ed31decdb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459482 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-dir\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459518 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-images\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459535 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.459854 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.462483 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.473889 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.474489 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.474571 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.474609 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.474671 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.474881 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.475037 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.475829 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.475909 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.476658 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.476837 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.476950 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.477128 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478196 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478271 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478403 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478489 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478577 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478726 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478843 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.479051 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.479153 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.478848 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.479820 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.479857 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.480168 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.480437 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.479055 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.481164 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.481242 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.481244 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.482485 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.483029 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.484207 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.484500 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.484602 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.484316 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Apr 04 01:58:53 crc 
kubenswrapper[4681]: I0404 01:58:53.484687 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.486188 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qj9xc"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.486599 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.487118 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.487564 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.487873 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.489474 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.489683 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.489844 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.489983 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.490072 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.492351 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.494014 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.494501 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587798-tmssr"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.494789 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.494992 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.495070 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.495550 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.495833 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.495942 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.496016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.496093 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.496606 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.497888 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.498462 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.498647 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.498785 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.507793 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sp7hg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.508391 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.511525 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.511761 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.513124 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.514596 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.535399 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.535847 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.537059 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.538580 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.540784 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.541152 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.541824 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.542829 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.543579 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.543820 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.544347 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.547093 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.547928 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.548425 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.548665 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-d25mp"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.548819 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.549458 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.550822 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.551349 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.554228 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.555138 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.557196 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.557931 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.559399 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561374 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561436 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561458 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561519 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561536 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e01cf-5149-43ef-8967-517e68e2bbaa-config\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561558 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2stx\" (UniqueName: \"kubernetes.io/projected/394b01ea-0b57-4565-aa56-96b6c5372a15-kube-api-access-m2stx\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561597 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc 
kubenswrapper[4681]: I0404 01:58:53.561617 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/15b64868-afa1-4d70-bfda-799ed31decdb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561637 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561656 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561675 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-serving-cert\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561694 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzpf9\" (UniqueName: 
\"kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561713 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-webhook-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561732 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561779 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wzrg\" (UniqueName: \"kubernetes.io/projected/cf31d35d-7049-4892-8bd1-3dc9deb4325c-kube-api-access-5wzrg\") pod \"dns-operator-744455d44c-d8sjd\" (UID: \"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561803 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n78gm\" (UniqueName: \"kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm\") pod \"auto-csr-approver-29587798-tmssr\" (UID: \"fcd50c8c-38c5-4c42-930d-2235c4384328\") " pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561829 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-dir\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561848 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-auth-proxy-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561866 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-proxy-tls\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561885 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-config\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561937 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561958 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-images\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561976 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.562374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.561614 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.562705 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.562730 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.562893 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563020 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563122 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563132 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563169 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-config\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563171 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-dir\") pod 
\"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563246 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394b01ea-0b57-4565-aa56-96b6c5372a15-serving-cert\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563248 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563293 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563322 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cedaefc-2211-4575-8993-8aff39f0d5a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563389 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2lcv\" (UniqueName: \"kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563405 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-images\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563429 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" 
Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563468 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563488 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563526 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-serving-cert\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563545 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-apiservice-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563564 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: 
\"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.563584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564013 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564061 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564457 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-service-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564636 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-policies\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564747 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q74m2\" (UniqueName: \"kubernetes.io/projected/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-kube-api-access-q74m2\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564836 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf31d35d-7049-4892-8bd1-3dc9deb4325c-metrics-tls\") pod \"dns-operator-744455d44c-d8sjd\" (UID: \"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.564992 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb"] Apr 
04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565109 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-images\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565473 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wphgv\" (UniqueName: \"kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565592 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565671 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-audit-policies\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565750 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565686 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565818 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69wwr\" (UniqueName: \"kubernetes.io/projected/0cedaefc-2211-4575-8993-8aff39f0d5a3-kube-api-access-69wwr\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8bqc\" (UniqueName: \"kubernetes.io/projected/15b64868-afa1-4d70-bfda-799ed31decdb-kube-api-access-n8bqc\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnblp\" (UniqueName: \"kubernetes.io/projected/b3fc9a5b-081d-4321-ac46-42992adcf541-kube-api-access-cnblp\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565893 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2sww\" (UniqueName: \"kubernetes.io/projected/867152c5-9f9e-40b4-8623-3437a9793b5d-kube-api-access-q2sww\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktq45\" (UniqueName: \"kubernetes.io/projected/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-kube-api-access-ktq45\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6cwc\" (UniqueName: \"kubernetes.io/projected/966e01cf-5149-43ef-8967-517e68e2bbaa-kube-api-access-v6cwc\") pod 
\"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.565994 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad077696-8d80-47a9-9bb2-23764ccd2b6a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566010 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566043 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tks5p\" (UniqueName: \"kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566061 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0881096e-2031-4e7a-8a1a-927fcceccf61-machine-approver-tls\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566091 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566125 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-encryption-config\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566144 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fwwx\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-kube-api-access-9fwwx\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566172 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: 
\"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566192 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r75w\" (UniqueName: \"kubernetes.io/projected/fd99e1c1-f4c2-42cb-ad67-76f781407b88-kube-api-access-4r75w\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd99e1c1-f4c2-42cb-ad67-76f781407b88-tmpfs\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566236 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8ng6\" (UniqueName: \"kubernetes.io/projected/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-kube-api-access-c8ng6\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566253 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566290 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566309 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966e01cf-5149-43ef-8967-517e68e2bbaa-serving-cert\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566344 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566360 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94txw\" (UniqueName: \"kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhwv\" (UniqueName: \"kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566400 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xmmh\" (UniqueName: \"kubernetes.io/projected/b964ba7c-9b8c-40d8-b671-915649b4d77b-kube-api-access-7xmmh\") pod \"downloads-7954f5f757-fn5hz\" (UID: \"b964ba7c-9b8c-40d8-b671-915649b4d77b\") " pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566416 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566430 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566446 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-service-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566478 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566923 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmxpf\" (UniqueName: \"kubernetes.io/projected/ad077696-8d80-47a9-9bb2-23764ccd2b6a-kube-api-access-wmxpf\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.566996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-config\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567019 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567041 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-config\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567063 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/559af3cb-f642-4e99-91e1-155840a1629c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567127 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvqhh\" (UniqueName: \"kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567147 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-trusted-ca\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567166 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0cedaefc-2211-4575-8993-8aff39f0d5a3-available-featuregates\") pod 
\"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567185 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567202 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567219 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567287 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567343 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fc9a5b-081d-4321-ac46-42992adcf541-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567388 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmbzv\" (UniqueName: \"kubernetes.io/projected/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-kube-api-access-jmbzv\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567407 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/867152c5-9f9e-40b4-8623-3437a9793b5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567437 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cx66\" (UniqueName: 
\"kubernetes.io/projected/3d528d31-23f7-48f0-9e52-5357bd410c3d-kube-api-access-8cx66\") pod \"migrator-59844c95c7-mcv5m\" (UID: \"3d528d31-23f7-48f0-9e52-5357bd410c3d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567458 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzj5g\" (UniqueName: \"kubernetes.io/projected/0881096e-2031-4e7a-8a1a-927fcceccf61-kube-api-access-jzj5g\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567498 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567518 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fc9a5b-081d-4321-ac46-42992adcf541-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567541 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-serving-cert\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567558 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567575 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567592 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-client\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567611 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/559af3cb-f642-4e99-91e1-155840a1629c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567651 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-client\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567641 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567668 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.567690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc95r\" (UniqueName: \"kubernetes.io/projected/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-kube-api-access-dc95r\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: 
I0404 01:58:53.568405 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.568661 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-serving-cert\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.568784 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.569296 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.569501 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15b64868-afa1-4d70-bfda-799ed31decdb-config\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.569822 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-encryption-config\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.569899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-config\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.570117 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.570964 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.571103 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.571749 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.572037 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.572363 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.572605 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/15b64868-afa1-4d70-bfda-799ed31decdb-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.572697 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-etcd-client\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.574028 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.574546 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-serving-cert\") pod \"authentication-operator-69f744f599-blqhv\" 
(UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.576364 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.576420 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.577460 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-d5qvq"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.578082 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.579751 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d8sjd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.582636 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.582724 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qj9xc"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.583347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.584742 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mftw8"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.587107 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4"] Apr 04 01:58:53 
crc kubenswrapper[4681]: I0404 01:58:53.589000 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.590439 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.591747 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.593315 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.595943 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.597936 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.600039 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.602712 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587798-tmssr"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.604468 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-blqhv"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.605737 4681 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.616082 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fn5hz"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.619530 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.622471 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.631968 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.632998 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.634090 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.636058 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tq7nn"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.636781 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d5qvq"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.638738 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.638863 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 
01:58:53.639981 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sp7hg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.640982 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.658829 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.660199 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.662395 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.663683 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-tv82h"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.665647 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.665762 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.666705 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.667717 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.668767 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669474 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966e01cf-5149-43ef-8967-517e68e2bbaa-serving-cert\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669502 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94txw\" (UniqueName: \"kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669541 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58dzf\" (UniqueName: \"kubernetes.io/projected/5999fa11-c8b4-4e7f-ae21-1b570aa79853-kube-api-access-58dzf\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669559 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669575 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmxpf\" (UniqueName: \"kubernetes.io/projected/ad077696-8d80-47a9-9bb2-23764ccd2b6a-kube-api-access-wmxpf\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669593 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-profile-collector-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669610 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/559af3cb-f642-4e99-91e1-155840a1629c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: 
\"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669627 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0cedaefc-2211-4575-8993-8aff39f0d5a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669658 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669674 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvqhh\" (UniqueName: \"kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669688 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-trusted-ca\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669704 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmbzv\" (UniqueName: \"kubernetes.io/projected/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-kube-api-access-jmbzv\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669721 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/867152c5-9f9e-40b4-8623-3437a9793b5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669767 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669803 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc 
kubenswrapper[4681]: I0404 01:58:53.669824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzj5g\" (UniqueName: \"kubernetes.io/projected/0881096e-2031-4e7a-8a1a-927fcceccf61-kube-api-access-jzj5g\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669846 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669865 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/559af3cb-f642-4e99-91e1-155840a1629c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669885 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert\") pod 
\"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669919 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669936 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669955 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e01cf-5149-43ef-8967-517e68e2bbaa-config\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669972 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.669990 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670024 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394b01ea-0b57-4565-aa56-96b6c5372a15-serving-cert\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 
01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670100 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670118 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-apiservice-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670138 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb2ng\" (UniqueName: \"kubernetes.io/projected/42fde299-09b3-4bec-83c9-71af1d27475a-kube-api-access-xb2ng\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670162 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-images\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69wwr\" (UniqueName: \"kubernetes.io/projected/0cedaefc-2211-4575-8993-8aff39f0d5a3-kube-api-access-69wwr\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670198 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnblp\" (UniqueName: \"kubernetes.io/projected/b3fc9a5b-081d-4321-ac46-42992adcf541-kube-api-access-cnblp\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670215 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670239 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tks5p\" (UniqueName: \"kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p\") pod \"console-f9d7485db-c6ktd\" (UID: 
\"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670299 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fwwx\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-kube-api-access-9fwwx\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670316 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd99e1c1-f4c2-42cb-ad67-76f781407b88-tmpfs\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670351 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670386 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7xmmh\" (UniqueName: \"kubernetes.io/projected/b964ba7c-9b8c-40d8-b671-915649b4d77b-kube-api-access-7xmmh\") pod \"downloads-7954f5f757-fn5hz\" (UID: \"b964ba7c-9b8c-40d8-b671-915649b4d77b\") " pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670403 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-service-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670443 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670463 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670478 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-srv-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fc9a5b-081d-4321-ac46-42992adcf541-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670515 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fc9a5b-081d-4321-ac46-42992adcf541-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670531 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cx66\" (UniqueName: \"kubernetes.io/projected/3d528d31-23f7-48f0-9e52-5357bd410c3d-kube-api-access-8cx66\") pod \"migrator-59844c95c7-mcv5m\" (UID: \"3d528d31-23f7-48f0-9e52-5357bd410c3d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670563 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-client\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670609 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc95r\" (UniqueName: \"kubernetes.io/projected/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-kube-api-access-dc95r\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670624 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670646 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2stx\" (UniqueName: \"kubernetes.io/projected/394b01ea-0b57-4565-aa56-96b6c5372a15-kube-api-access-m2stx\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670661 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670677 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-serving-cert\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670693 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-webhook-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670710 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzpf9\" (UniqueName: \"kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670728 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wzrg\" (UniqueName: \"kubernetes.io/projected/cf31d35d-7049-4892-8bd1-3dc9deb4325c-kube-api-access-5wzrg\") pod \"dns-operator-744455d44c-d8sjd\" (UID: \"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670745 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n78gm\" (UniqueName: \"kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm\") pod \"auto-csr-approver-29587798-tmssr\" (UID: \"fcd50c8c-38c5-4c42-930d-2235c4384328\") " pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-proxy-tls\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-config\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670791 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-auth-proxy-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670809 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-config\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670825 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670851 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0cedaefc-2211-4575-8993-8aff39f0d5a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670870 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670887 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2lcv\" (UniqueName: \"kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670907 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670918 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0cedaefc-2211-4575-8993-8aff39f0d5a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf31d35d-7049-4892-8bd1-3dc9deb4325c-metrics-tls\") pod \"dns-operator-744455d44c-d8sjd\" (UID: 
\"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.670987 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wphgv\" (UniqueName: \"kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671082 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2sww\" (UniqueName: \"kubernetes.io/projected/867152c5-9f9e-40b4-8623-3437a9793b5d-kube-api-access-q2sww\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671124 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6cwc\" (UniqueName: \"kubernetes.io/projected/966e01cf-5149-43ef-8967-517e68e2bbaa-kube-api-access-v6cwc\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671144 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" 
(UniqueName: \"kubernetes.io/secret/ad077696-8d80-47a9-9bb2-23764ccd2b6a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671161 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671189 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0881096e-2031-4e7a-8a1a-927fcceccf61-machine-approver-tls\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671227 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r75w\" (UniqueName: \"kubernetes.io/projected/fd99e1c1-f4c2-42cb-ad67-76f781407b88-kube-api-access-4r75w\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc 
kubenswrapper[4681]: I0404 01:58:53.671249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671777 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.672299 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.672513 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.672699 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 
crc kubenswrapper[4681]: I0404 01:58:53.672776 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-service-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.673186 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.673334 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.673590 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-trusted-ca\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.673760 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: 
\"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.673993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-images\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.674574 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.674864 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.675170 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/cf31d35d-7049-4892-8bd1-3dc9deb4325c-metrics-tls\") pod \"dns-operator-744455d44c-d8sjd\" (UID: \"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.676090 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/867152c5-9f9e-40b4-8623-3437a9793b5d-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.671201 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.677055 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.677072 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.677082 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tv82h"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.677110 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dnkcc"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.678008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.678850 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t522l\" (UID: 
\"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.679017 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.679351 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/394b01ea-0b57-4565-aa56-96b6c5372a15-serving-cert\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.679615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.679710 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.680183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.680328 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.680643 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/394b01ea-0b57-4565-aa56-96b6c5372a15-config\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.681238 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/fd99e1c1-f4c2-42cb-ad67-76f781407b88-tmpfs\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.681598 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t522l\" (UID: 
\"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.681679 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kwds4"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.682221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.682448 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.682521 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.682552 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.683005 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3fc9a5b-081d-4321-ac46-42992adcf541-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.683118 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-proxy-tls\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.683860 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.684007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.684708 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.684843 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.684989 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3fc9a5b-081d-4321-ac46-42992adcf541-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.685128 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dnkcc"] Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.685464 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.685915 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.686186 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.686574 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.686665 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-client\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.687387 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-serving-cert\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.687411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.687481 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.687708 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.698822 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.718648 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.732000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-config\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.738249 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.745752 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-etcd-ca\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:53 
crc kubenswrapper[4681]: I0404 01:58:53.758889 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.762441 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-srv-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58dzf\" (UniqueName: \"kubernetes.io/projected/5999fa11-c8b4-4e7f-ae21-1b570aa79853-kube-api-access-58dzf\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772448 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-profile-collector-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772524 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xb2ng\" (UniqueName: \"kubernetes.io/projected/42fde299-09b3-4bec-83c9-71af1d27475a-kube-api-access-xb2ng\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.772560 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.778960 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.798430 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.818866 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.822298 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0881096e-2031-4e7a-8a1a-927fcceccf61-machine-approver-tls\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.839190 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.843400 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0881096e-2031-4e7a-8a1a-927fcceccf61-auth-proxy-config\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.859050 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.879161 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.899534 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.918949 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.923143 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/966e01cf-5149-43ef-8967-517e68e2bbaa-serving-cert\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: 
\"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.939148 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.966838 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.972300 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/966e01cf-5149-43ef-8967-517e68e2bbaa-config\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.978348 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 01:58:53 crc kubenswrapper[4681]: I0404 01:58:53.999116 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.019255 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.039711 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.065169 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.077658 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0cedaefc-2211-4575-8993-8aff39f0d5a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.079753 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.111839 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.119193 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.121243 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/559af3cb-f642-4e99-91e1-155840a1629c-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.139542 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.152281 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/559af3cb-f642-4e99-91e1-155840a1629c-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.159156 4681 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.178981 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.198457 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.206352 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-apiservice-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.208231 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fd99e1c1-f4c2-42cb-ad67-76f781407b88-webhook-cert\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.219595 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.238567 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.258953 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.269618 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ad077696-8d80-47a9-9bb2-23764ccd2b6a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.279693 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.284160 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.285182 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-profile-collector-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.299854 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.305100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.318889 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.339915 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.368607 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.379359 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.399958 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.419546 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.439285 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.459348 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.479681 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.499844 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.518998 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.538954 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.566624 4681 request.go:700] Waited for 1.007737096s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.568692 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.579699 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.599357 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.620008 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.639630 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.658902 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.679341 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.699107 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Apr 04 
01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.719382 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.739895 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.759903 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.766940 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/5999fa11-c8b4-4e7f-ae21-1b570aa79853-srv-cert\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:54 crc kubenswrapper[4681]: E0404 01:58:54.773391 4681 secret.go:188] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Apr 04 01:58:54 crc kubenswrapper[4681]: E0404 01:58:54.773505 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert podName:42fde299-09b3-4bec-83c9-71af1d27475a nodeName:}" failed. No retries permitted until 2026-04-04 01:58:55.27347251 +0000 UTC m=+214.939247660 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert") pod "kube-storage-version-migrator-operator-b67b599dd-tb658" (UID: "42fde299-09b3-4bec-83c9-71af1d27475a") : failed to sync secret cache: timed out waiting for the condition Apr 04 01:58:54 crc kubenswrapper[4681]: E0404 01:58:54.774214 4681 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition Apr 04 01:58:54 crc kubenswrapper[4681]: E0404 01:58:54.774464 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config podName:42fde299-09b3-4bec-83c9-71af1d27475a nodeName:}" failed. No retries permitted until 2026-04-04 01:58:55.274433527 +0000 UTC m=+214.940208687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config") pod "kube-storage-version-migrator-operator-b67b599dd-tb658" (UID: "42fde299-09b3-4bec-83c9-71af1d27475a") : failed to sync configmap cache: timed out waiting for the condition Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.779442 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.799847 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.818915 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.839650 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.860128 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.879423 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.899336 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.918952 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.939922 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.984186 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q74m2\" (UniqueName: \"kubernetes.io/projected/ea58e7c7-9e3b-42ea-898f-c161a7ce17d7-kube-api-access-q74m2\") pod \"apiserver-7bbb656c7d-7knn5\" (UID: \"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:54 crc kubenswrapper[4681]: I0404 01:58:54.989114 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.010061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktq45\" (UniqueName: \"kubernetes.io/projected/36b289c9-56bc-4b1a-ab7d-1777b34bcaf4-kube-api-access-ktq45\") pod \"openshift-apiserver-operator-796bbdcf4f-hjl4b\" (UID: \"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.027812 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8ng6\" (UniqueName: \"kubernetes.io/projected/5bb4d019-ae1f-4aa2-b255-a6974c4edf4a-kube-api-access-c8ng6\") pod \"authentication-operator-69f744f599-blqhv\" (UID: \"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.039170 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.044477 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhwv\" (UniqueName: \"kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv\") pod \"apiserver-76f77b778f-xpn96\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.059006 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.080161 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.099815 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.119079 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.142626 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.168470 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.184462 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.199874 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.211361 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.220544 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.238761 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.241899 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.259102 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.259486 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5"] Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.274995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.280067 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.298062 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.298127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.299472 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/42fde299-09b3-4bec-83c9-71af1d27475a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.301622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42fde299-09b3-4bec-83c9-71af1d27475a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.319790 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.337381 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8bqc\" (UniqueName: \"kubernetes.io/projected/15b64868-afa1-4d70-bfda-799ed31decdb-kube-api-access-n8bqc\") pod \"machine-api-operator-5694c8668f-mftw8\" (UID: \"15b64868-afa1-4d70-bfda-799ed31decdb\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.338590 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Apr 04 01:58:55 crc kubenswrapper[4681]: W0404 01:58:55.348996 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea58e7c7_9e3b_42ea_898f_c161a7ce17d7.slice/crio-10ac7f395a2c75b2836598c36fd43bf085a442e2063a81e6d5fa08883cad6169 WatchSource:0}: Error finding container 10ac7f395a2c75b2836598c36fd43bf085a442e2063a81e6d5fa08883cad6169: Status 404 returned error can't find the container with id 
10ac7f395a2c75b2836598c36fd43bf085a442e2063a81e6d5fa08883cad6169 Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.359132 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.422562 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.424882 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 01:58:55 crc kubenswrapper[4681]: W0404 01:58:55.439999 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5da93ec3_d19f_40d9_97f1_994998839180.slice/crio-8829d19903458c45eb0ab9c5aedc01fd8312ee86f1718766167a94281efb0d41 WatchSource:0}: Error finding container 8829d19903458c45eb0ab9c5aedc01fd8312ee86f1718766167a94281efb0d41: Status 404 returned error can't find the container with id 8829d19903458c45eb0ab9c5aedc01fd8312ee86f1718766167a94281efb0d41 Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.443234 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b"] Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.443550 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: W0404 01:58:55.455073 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36b289c9_56bc_4b1a_ab7d_1777b34bcaf4.slice/crio-864a020c2718d40524346f78c22b52a99c5f24a082b5ee4ff48b12b88b108147 WatchSource:0}: Error finding container 864a020c2718d40524346f78c22b52a99c5f24a082b5ee4ff48b12b88b108147: Status 404 returned error can't find the container with id 
864a020c2718d40524346f78c22b52a99c5f24a082b5ee4ff48b12b88b108147 Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.460620 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.479348 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.494398 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-blqhv"] Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.516937 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94txw\" (UniqueName: \"kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw\") pod \"route-controller-manager-6576b87f9c-5wdgm\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.530017 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.533046 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmxpf\" (UniqueName: \"kubernetes.io/projected/ad077696-8d80-47a9-9bb2-23764ccd2b6a-kube-api-access-wmxpf\") pod \"multus-admission-controller-857f4d67dd-sp7hg\" (UID: \"ad077696-8d80-47a9-9bb2-23764ccd2b6a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.557825 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvqhh\" (UniqueName: \"kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh\") pod \"oauth-openshift-558db77b4-t522l\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.557924 4681 request.go:700] Waited for 1.88684933s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-etcd-operator/serviceaccounts/etcd-operator/token Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.560236 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.575243 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmbzv\" (UniqueName: \"kubernetes.io/projected/3c1605aa-6f4d-4754-9e6e-5f2c2d564f73-kube-api-access-jmbzv\") pod \"etcd-operator-b45778765-qj9xc\" (UID: \"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.601094 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xmmh\" (UniqueName: \"kubernetes.io/projected/b964ba7c-9b8c-40d8-b671-915649b4d77b-kube-api-access-7xmmh\") pod \"downloads-7954f5f757-fn5hz\" (UID: \"b964ba7c-9b8c-40d8-b671-915649b4d77b\") " pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.652106 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.654891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnblp\" (UniqueName: \"kubernetes.io/projected/b3fc9a5b-081d-4321-ac46-42992adcf541-kube-api-access-cnblp\") pod \"openshift-controller-manager-operator-756b6f6bc6-ntv7l\" (UID: \"b3fc9a5b-081d-4321-ac46-42992adcf541\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.671887 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69wwr\" (UniqueName: \"kubernetes.io/projected/0cedaefc-2211-4575-8993-8aff39f0d5a3-kube-api-access-69wwr\") pod \"openshift-config-operator-7777fb866f-gwptd\" (UID: \"0cedaefc-2211-4575-8993-8aff39f0d5a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.680224 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r75w\" (UniqueName: \"kubernetes.io/projected/fd99e1c1-f4c2-42cb-ad67-76f781407b88-kube-api-access-4r75w\") pod \"packageserver-d55dfcdfc-s28vn\" (UID: \"fd99e1c1-f4c2-42cb-ad67-76f781407b88\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.692841 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cx66\" (UniqueName: \"kubernetes.io/projected/3d528d31-23f7-48f0-9e52-5357bd410c3d-kube-api-access-8cx66\") pod \"migrator-59844c95c7-mcv5m\" (UID: \"3d528d31-23f7-48f0-9e52-5357bd410c3d\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.696543 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.699021 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.726993 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.748976 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.752083 4681 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.752662 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.765862 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-mftw8"] Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.776299 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2stx\" (UniqueName: \"kubernetes.io/projected/394b01ea-0b57-4565-aa56-96b6c5372a15-kube-api-access-m2stx\") pod \"console-operator-58897d9998-tq7nn\" (UID: \"394b01ea-0b57-4565-aa56-96b6c5372a15\") " pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:55 crc kubenswrapper[4681]: W0404 01:58:55.785859 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15b64868_afa1_4d70_bfda_799ed31decdb.slice/crio-120ed0f5df7f9cff388a84f149f40af1895c4c05ba1aec829fc98ca63f6e2173 
WatchSource:0}: Error finding container 120ed0f5df7f9cff388a84f149f40af1895c4c05ba1aec829fc98ca63f6e2173: Status 404 returned error can't find the container with id 120ed0f5df7f9cff388a84f149f40af1895c4c05ba1aec829fc98ca63f6e2173 Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.794319 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6cwc\" (UniqueName: \"kubernetes.io/projected/966e01cf-5149-43ef-8967-517e68e2bbaa-kube-api-access-v6cwc\") pod \"service-ca-operator-777779d784-x7dlg\" (UID: \"966e01cf-5149-43ef-8967-517e68e2bbaa\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.799039 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.809058 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.818464 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.821909 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wphgv\" (UniqueName: \"kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv\") pod \"marketplace-operator-79b997595-kxzxq\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.825926 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-sp7hg"] Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.832774 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tks5p\" (UniqueName: \"kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p\") pod \"console-f9d7485db-c6ktd\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.838708 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.852775 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.854923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2sww\" (UniqueName: \"kubernetes.io/projected/867152c5-9f9e-40b4-8623-3437a9793b5d-kube-api-access-q2sww\") pod \"cluster-samples-operator-665b6dd947-8g27v\" (UID: \"867152c5-9f9e-40b4-8623-3437a9793b5d\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:55 crc kubenswrapper[4681]: W0404 01:58:55.856411 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad077696_8d80_47a9_9bb2_23764ccd2b6a.slice/crio-7b1a6a351b144010c446f15563db06140ef57febc2796f34d7c1965e69ce98ac WatchSource:0}: Error finding container 7b1a6a351b144010c446f15563db06140ef57febc2796f34d7c1965e69ce98ac: Status 404 returned error can't find the container with id 7b1a6a351b144010c446f15563db06140ef57febc2796f34d7c1965e69ce98ac Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.879792 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc95r\" (UniqueName: \"kubernetes.io/projected/1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2-kube-api-access-dc95r\") pod \"machine-config-operator-74547568cd-2fpgr\" (UID: \"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.893071 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wzrg\" (UniqueName: \"kubernetes.io/projected/cf31d35d-7049-4892-8bd1-3dc9deb4325c-kube-api-access-5wzrg\") pod \"dns-operator-744455d44c-d8sjd\" (UID: \"cf31d35d-7049-4892-8bd1-3dc9deb4325c\") " pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.909522 
4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.915454 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n78gm\" (UniqueName: \"kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm\") pod \"auto-csr-approver-29587798-tmssr\" (UID: \"fcd50c8c-38c5-4c42-930d-2235c4384328\") " pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.920895 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.924918 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.940369 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.941410 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.958824 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.962823 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:58:55 crc kubenswrapper[4681]: I0404 01:58:55.984354 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.002251 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fwwx\" (UniqueName: \"kubernetes.io/projected/559af3cb-f642-4e99-91e1-155840a1629c-kube-api-access-9fwwx\") pod \"cluster-image-registry-operator-dc59b4c8b-dt66q\" (UID: \"559af3cb-f642-4e99-91e1-155840a1629c\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.010225 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.015400 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.034764 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2lcv\" (UniqueName: \"kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv\") pod \"controller-manager-879f6c89f-wpzmf\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.039564 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzj5g\" (UniqueName: \"kubernetes.io/projected/0881096e-2031-4e7a-8a1a-927fcceccf61-kube-api-access-jzj5g\") pod \"machine-approver-56656f9798-9rfl7\" (UID: \"0881096e-2031-4e7a-8a1a-927fcceccf61\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 
01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.041642 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.055637 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzpf9\" (UniqueName: \"kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9\") pod \"collect-profiles-29587785-ft78m\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.060407 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.066872 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.073100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58dzf\" (UniqueName: \"kubernetes.io/projected/5999fa11-c8b4-4e7f-ae21-1b570aa79853-kube-api-access-58dzf\") pod \"catalog-operator-68c6474976-dv8vg\" (UID: \"5999fa11-c8b4-4e7f-ae21-1b570aa79853\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.082874 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.099669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xb2ng\" (UniqueName: \"kubernetes.io/projected/42fde299-09b3-4bec-83c9-71af1d27475a-kube-api-access-xb2ng\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-tb658\" (UID: \"42fde299-09b3-4bec-83c9-71af1d27475a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.114387 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.127062 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.132568 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.136880 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.138151 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fn5hz"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.143698 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-qj9xc"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.146298 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.151847 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d528d31_23f7_48f0_9e52_5357bd410c3d.slice/crio-015a1a809141475bb91e322dfe33cb8ee5b878f87aedb08da2ba038ab7da0765 WatchSource:0}: Error finding container 015a1a809141475bb91e322dfe33cb8ee5b878f87aedb08da2ba038ab7da0765: Status 404 returned error can't find the container with id 015a1a809141475bb91e322dfe33cb8ee5b878f87aedb08da2ba038ab7da0765 Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.152710 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cedaefc_2211_4575_8993_8aff39f0d5a3.slice/crio-1b1c39421b58618367d1d5fc2141cbdfe12cac85b34194fd90f9340bee512721 WatchSource:0}: Error finding container 1b1c39421b58618367d1d5fc2141cbdfe12cac85b34194fd90f9340bee512721: Status 404 returned error can't find the container with id 1b1c39421b58618367d1d5fc2141cbdfe12cac85b34194fd90f9340bee512721 Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.155755 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb964ba7c_9b8c_40d8_b671_915649b4d77b.slice/crio-ee7a1b8c8ddbf58993e2ad10d44108552370b2bbe80dbe07ba9023b8c6d8a186 WatchSource:0}: Error finding container ee7a1b8c8ddbf58993e2ad10d44108552370b2bbe80dbe07ba9023b8c6d8a186: Status 404 returned error can't find the container with id ee7a1b8c8ddbf58993e2ad10d44108552370b2bbe80dbe07ba9023b8c6d8a186 Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.169433 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.217678 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c1605aa_6f4d_4754_9e6e_5f2c2d564f73.slice/crio-6ad820dd15c7412cc008aacbce31cd15171db73467394f14ada801c91f22cff9 WatchSource:0}: Error finding container 6ad820dd15c7412cc008aacbce31cd15171db73467394f14ada801c91f22cff9: Status 404 returned error can't find the container with id 6ad820dd15c7412cc008aacbce31cd15171db73467394f14ada801c91f22cff9 Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.218814 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a818bf-9611-4945-9350-97ca20b42b26-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-stats-auth\") pod \"router-default-5444994796-d25mp\" 
(UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220819 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-proxy-tls\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220896 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220931 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bc827\" (UniqueName: \"kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220948 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grggx\" (UniqueName: \"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-kube-api-access-grggx\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.220995 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221014 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1182c93a-3e68-4418-aeb7-8394689b55c2-config\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221616 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scll7\" (UniqueName: \"kubernetes.io/projected/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-kube-api-access-scll7\") pod 
\"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.221663 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:56.721639602 +0000 UTC m=+216.387414822 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221728 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221780 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbede535-d73e-41cf-b483-6f6794647f90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 
01:58:56.221812 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221834 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a818bf-9611-4945-9350-97ca20b42b26-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.221872 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-metrics-certs\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.222182 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxjjm\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.222333 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30f28982-f167-414c-9ad7-b76a6e8eb5ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.222441 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-srv-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.222476 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll5zt\" (UniqueName: \"kubernetes.io/projected/2036b47b-0f29-4c79-b26c-c8877d60cfc4-kube-api-access-ll5zt\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.222516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.224166 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.224935 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-default-certificate\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225297 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225450 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1182c93a-3e68-4418-aeb7-8394689b55c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225596 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1182c93a-3e68-4418-aeb7-8394689b55c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225622 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225653 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbede535-d73e-41cf-b483-6f6794647f90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.225995 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226211 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: 
\"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226348 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226377 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk257\" (UniqueName: \"kubernetes.io/projected/30f28982-f167-414c-9ad7-b76a6e8eb5ca-kube-api-access-hk257\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226429 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7623b88-9c92-45ab-b541-ee947e5c67df-metrics-tls\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226460 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6pjv\" (UniqueName: \"kubernetes.io/projected/a7623b88-9c92-45ab-b541-ee947e5c67df-kube-api-access-n6pjv\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226501 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7623b88-9c92-45ab-b541-ee947e5c67df-config-volume\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-service-ca-bundle\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226646 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbede535-d73e-41cf-b483-6f6794647f90-config\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226687 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9a818bf-9611-4945-9350-97ca20b42b26-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226734 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw6w6\" (UniqueName: \"kubernetes.io/projected/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-kube-api-access-kw6w6\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.226751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgdf\" (UniqueName: \"kubernetes.io/projected/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-kube-api-access-gfgdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.248728 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.258294 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" event={"ID":"3d528d31-23f7-48f0-9e52-5357bd410c3d","Type":"ContainerStarted","Data":"015a1a809141475bb91e322dfe33cb8ee5b878f87aedb08da2ba038ab7da0765"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.259164 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" event={"ID":"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a","Type":"ContainerStarted","Data":"a7cda7f571b2cd2876505354333cf955e70b901f627028f6c4dbc35c3de7c46a"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.259902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" event={"ID":"b54a0848-6df8-47da-8537-a01d44322ca4","Type":"ContainerStarted","Data":"eedce7f58219c8c323b008af1bc4351649a5e5ef3652937cde3bceb001fb6e83"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.261560 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" event={"ID":"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73","Type":"ContainerStarted","Data":"6ad820dd15c7412cc008aacbce31cd15171db73467394f14ada801c91f22cff9"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.273831 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerStarted","Data":"8829d19903458c45eb0ab9c5aedc01fd8312ee86f1718766167a94281efb0d41"} Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.278340 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ddf35f8_67ba_4e4c_ad0e_5d20c24b5798.slice/crio-fb66f684fcb12fc8bf6dfd6dc2f7f92e1acc9a42c1878c52bf543bcb51c94850 WatchSource:0}: Error finding container fb66f684fcb12fc8bf6dfd6dc2f7f92e1acc9a42c1878c52bf543bcb51c94850: Status 404 returned error can't find the container with id fb66f684fcb12fc8bf6dfd6dc2f7f92e1acc9a42c1878c52bf543bcb51c94850 Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.279749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" event={"ID":"ad077696-8d80-47a9-9bb2-23764ccd2b6a","Type":"ContainerStarted","Data":"7b1a6a351b144010c446f15563db06140ef57febc2796f34d7c1965e69ce98ac"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333149 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333325 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-stats-auth\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333349 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-proxy-tls\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc 
kubenswrapper[4681]: I0404 01:58:56.333383 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-mountpoint-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333438 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333460 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333482 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bc827\" (UniqueName: \"kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333505 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grggx\" (UniqueName: \"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-kube-api-access-grggx\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333529 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j89pk\" (UniqueName: \"kubernetes.io/projected/c52ff89d-ecee-40a6-aff6-88924f18f386-kube-api-access-j89pk\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333562 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1182c93a-3e68-4418-aeb7-8394689b55c2-config\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333603 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-socket-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333624 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-csi-data-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scll7\" (UniqueName: \"kubernetes.io/projected/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-kube-api-access-scll7\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333689 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333713 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbede535-d73e-41cf-b483-6f6794647f90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333743 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a818bf-9611-4945-9350-97ca20b42b26-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-metrics-certs\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333825 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwjrp\" (UniqueName: \"kubernetes.io/projected/82f388a5-daaa-45da-ba97-d3ea85530dfa-kube-api-access-bwjrp\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxjjm\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333871 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/30f28982-f167-414c-9ad7-b76a6e8eb5ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333906 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-srv-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333927 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll5zt\" (UniqueName: \"kubernetes.io/projected/2036b47b-0f29-4c79-b26c-c8877d60cfc4-kube-api-access-ll5zt\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333946 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.333967 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-plugins-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334005 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-default-certificate\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334027 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334047 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334073 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1182c93a-3e68-4418-aeb7-8394689b55c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334097 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1182c93a-3e68-4418-aeb7-8394689b55c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334136 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbede535-d73e-41cf-b483-6f6794647f90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334167 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334189 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334243 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" 
(UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-registration-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334321 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk257\" (UniqueName: \"kubernetes.io/projected/30f28982-f167-414c-9ad7-b76a6e8eb5ca-kube-api-access-hk257\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7623b88-9c92-45ab-b541-ee947e5c67df-metrics-tls\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6pjv\" (UniqueName: \"kubernetes.io/projected/a7623b88-9c92-45ab-b541-ee947e5c67df-kube-api-access-n6pjv\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334419 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334466 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7623b88-9c92-45ab-b541-ee947e5c67df-config-volume\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334487 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-service-ca-bundle\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82f388a5-daaa-45da-ba97-d3ea85530dfa-cert\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334565 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1182c93a-3e68-4418-aeb7-8394689b55c2-config\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-78ptj\" (UniqueName: \"kubernetes.io/projected/a2c6ee2e-54a9-4992-ac77-2b1f65957602-kube-api-access-78ptj\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334641 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbede535-d73e-41cf-b483-6f6794647f90-config\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334696 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9a818bf-9611-4945-9350-97ca20b42b26-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334743 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw6w6\" (UniqueName: \"kubernetes.io/projected/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-kube-api-access-kw6w6\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.334765 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:58:56.834753966 +0000 UTC m=+216.500529086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334794 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgdf\" (UniqueName: \"kubernetes.io/projected/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-kube-api-access-gfgdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334872 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-certs\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a818bf-9611-4945-9350-97ca20b42b26-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334952 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-node-bootstrap-token\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.334977 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.337898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cbede535-d73e-41cf-b483-6f6794647f90-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.337335 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-trusted-ca\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.339396 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-metrics-tls\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" 
Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.340140 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.340633 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerStarted","Data":"1b1c39421b58618367d1d5fc2141cbdfe12cac85b34194fd90f9340bee512721"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.341349 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.341602 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cbede535-d73e-41cf-b483-6f6794647f90-config\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.341626 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a7623b88-9c92-45ab-b541-ee947e5c67df-config-volume\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " 
pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.342639 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-service-ca-bundle\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.343080 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.343516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-stats-auth\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.344249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9a818bf-9611-4945-9350-97ca20b42b26-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.344905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9a818bf-9611-4945-9350-97ca20b42b26-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: 
\"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.345393 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a7623b88-9c92-45ab-b541-ee947e5c67df-metrics-tls\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.345722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.346231 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-proxy-tls\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.346843 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.347609 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: 
\"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.347813 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" event={"ID":"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4","Type":"ContainerStarted","Data":"864a020c2718d40524346f78c22b52a99c5f24a082b5ee4ff48b12b88b108147"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.349084 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.349761 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1182c93a-3e68-4418-aeb7-8394689b55c2-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.349909 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/30f28982-f167-414c-9ad7-b76a6e8eb5ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.350817 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.350923 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tq7nn"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.351383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-default-certificate\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.351723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn5hz" event={"ID":"b964ba7c-9b8c-40d8-b671-915649b4d77b","Type":"ContainerStarted","Data":"ee7a1b8c8ddbf58993e2ad10d44108552370b2bbe80dbe07ba9023b8c6d8a186"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.352651 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" event={"ID":"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7","Type":"ContainerStarted","Data":"10ac7f395a2c75b2836598c36fd43bf085a442e2063a81e6d5fa08883cad6169"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.354618 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" event={"ID":"15b64868-afa1-4d70-bfda-799ed31decdb","Type":"ContainerStarted","Data":"120ed0f5df7f9cff388a84f149f40af1895c4c05ba1aec829fc98ca63f6e2173"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.355468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-srv-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.355584 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2036b47b-0f29-4c79-b26c-c8877d60cfc4-profile-collector-cert\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.355923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-metrics-certs\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.357054 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.357151 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" event={"ID":"0b28142c-7b85-406e-b158-42517bab7f11","Type":"ContainerStarted","Data":"40efb309662502b632afcd3f3796db3bc782ef5d2337372770a5f6aa135f37ce"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.358452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" 
event={"ID":"966e01cf-5149-43ef-8967-517e68e2bbaa","Type":"ContainerStarted","Data":"d1e221e58285a358ffab4ca64f370d8ef8ee8febfb197aa0397c58930f77734b"} Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.363908 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.364762 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bc827\" (UniqueName: \"kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827\") pod \"service-ca-9c57cc56f-hwjqf\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.365045 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scll7\" (UniqueName: \"kubernetes.io/projected/8b59f8f0-e1c8-4187-a509-1a7f58a0ba37-kube-api-access-scll7\") pod \"router-default-5444994796-d25mp\" (UID: \"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37\") " pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.408892 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-d8sjd"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.409411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-bound-sa-token\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.415729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grggx\" (UniqueName: 
\"kubernetes.io/projected/6c6431a6-2eaa-4931-9891-f6c08c3ed5ce-kube-api-access-grggx\") pod \"ingress-operator-5b745b69d9-j4q2w\" (UID: \"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.418074 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f9a818bf-9611-4945-9350-97ca20b42b26-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9t5n9\" (UID: \"f9a818bf-9611-4945-9350-97ca20b42b26\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.436741 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk257\" (UniqueName: \"kubernetes.io/projected/30f28982-f167-414c-9ad7-b76a6e8eb5ca-kube-api-access-hk257\") pod \"package-server-manager-789f6589d5-l6r2x\" (UID: \"30f28982-f167-414c-9ad7-b76a6e8eb5ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438491 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82f388a5-daaa-45da-ba97-d3ea85530dfa-cert\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438804 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78ptj\" (UniqueName: \"kubernetes.io/projected/a2c6ee2e-54a9-4992-ac77-2b1f65957602-kube-api-access-78ptj\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438875 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-certs\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438901 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-node-bootstrap-token\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438933 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-mountpoint-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438955 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438981 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j89pk\" (UniqueName: \"kubernetes.io/projected/c52ff89d-ecee-40a6-aff6-88924f18f386-kube-api-access-j89pk\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " 
pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.438998 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-socket-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439014 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-csi-data-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwjrp\" (UniqueName: \"kubernetes.io/projected/82f388a5-daaa-45da-ba97-d3ea85530dfa-kube-api-access-bwjrp\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439086 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-plugins-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439128 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-registration-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " 
pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439367 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-registration-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439421 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-socket-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439472 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-csi-data-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-plugins-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.439509 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a2c6ee2e-54a9-4992-ac77-2b1f65957602-mountpoint-dir\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: 
E0404 01:58:56.439630 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:56.939616457 +0000 UTC m=+216.605391577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.443590 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-certs\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.448219 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c52ff89d-ecee-40a6-aff6-88924f18f386-node-bootstrap-token\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.451364 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82f388a5-daaa-45da-ba97-d3ea85530dfa-cert\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.475706 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgdf\" (UniqueName: \"kubernetes.io/projected/b8f9c5e4-05ac-48dd-8e04-81b8087e3a72-kube-api-access-gfgdf\") pod \"control-plane-machine-set-operator-78cbb6b69f-c76br\" (UID: \"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.493630 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.494362 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.495182 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw6w6\" (UniqueName: \"kubernetes.io/projected/b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e-kube-api-access-kw6w6\") pod \"machine-config-controller-84d6567774-cxjm4\" (UID: \"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.497636 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll5zt\" (UniqueName: \"kubernetes.io/projected/2036b47b-0f29-4c79-b26c-c8877d60cfc4-kube-api-access-ll5zt\") pod \"olm-operator-6b444d44fb-j4m77\" (UID: \"2036b47b-0f29-4c79-b26c-c8877d60cfc4\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.502057 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.509390 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.509739 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.515065 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cbede535-d73e-41cf-b483-6f6794647f90-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-ghxgb\" (UID: \"cbede535-d73e-41cf-b483-6f6794647f90\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.537074 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6pjv\" (UniqueName: \"kubernetes.io/projected/a7623b88-9c92-45ab-b541-ee947e5c67df-kube-api-access-n6pjv\") pod \"dns-default-d5qvq\" (UID: \"a7623b88-9c92-45ab-b541-ee947e5c67df\") " pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.539678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.540110 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.040097415 +0000 UTC m=+216.705872535 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.541780 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.545761 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.552862 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxjjm\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.553568 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.563629 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.564060 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.569710 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-d5qvq" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.576091 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.576664 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.609934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1182c93a-3e68-4418-aeb7-8394689b55c2-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-p6wvt\" (UID: \"1182c93a-3e68-4418-aeb7-8394689b55c2\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.615185 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.623554 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf31d35d_7049_4892_8bd1_3dc9deb4325c.slice/crio-ee6f87ecb0c0bc0c067f33e8d5c096d0d5b33113b00168bcf0c39e5524ec507d 
WatchSource:0}: Error finding container ee6f87ecb0c0bc0c067f33e8d5c096d0d5b33113b00168bcf0c39e5524ec507d: Status 404 returned error can't find the container with id ee6f87ecb0c0bc0c067f33e8d5c096d0d5b33113b00168bcf0c39e5524ec507d Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.631662 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.638737 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78ptj\" (UniqueName: \"kubernetes.io/projected/a2c6ee2e-54a9-4992-ac77-2b1f65957602-kube-api-access-78ptj\") pod \"csi-hostpathplugin-dnkcc\" (UID: \"a2c6ee2e-54a9-4992-ac77-2b1f65957602\") " pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.642105 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.642660 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.142645212 +0000 UTC m=+216.808420332 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.657095 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j89pk\" (UniqueName: \"kubernetes.io/projected/c52ff89d-ecee-40a6-aff6-88924f18f386-kube-api-access-j89pk\") pod \"machine-config-server-kwds4\" (UID: \"c52ff89d-ecee-40a6-aff6-88924f18f386\") " pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.670635 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwjrp\" (UniqueName: \"kubernetes.io/projected/82f388a5-daaa-45da-ba97-d3ea85530dfa-kube-api-access-bwjrp\") pod \"ingress-canary-tv82h\" (UID: \"82f388a5-daaa-45da-ba97-d3ea85530dfa\") " pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.696304 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587798-tmssr"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.708429 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq"] Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.743639 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.743921 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.243894002 +0000 UTC m=+216.909669122 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.774169 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.781249 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.838598 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.845830 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.846133 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.34612198 +0000 UTC m=+217.011897100 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.877496 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-tv82h" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.890973 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" Apr 04 01:58:56 crc kubenswrapper[4681]: W0404 01:58:56.904856 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcd50c8c_38c5_4c42_930d_2235c4384328.slice/crio-5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22 WatchSource:0}: Error finding container 5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22: Status 404 returned error can't find the container with id 5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22 Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.905701 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kwds4" Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.908633 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 01:58:56 crc kubenswrapper[4681]: I0404 01:58:56.947086 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:56 crc kubenswrapper[4681]: E0404 01:58:56.948519 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.448497971 +0000 UTC m=+217.114273091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.049192 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.049804 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.549784002 +0000 UTC m=+217.215559122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.162800 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.162953 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.162994 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.662967438 +0000 UTC m=+217.328742558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.163202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.163578 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.663564695 +0000 UTC m=+217.329339815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.264914 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.265225 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.765211776 +0000 UTC m=+217.430986896 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.309062 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.310159 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.330245 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.363250 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-d5qvq"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.367041 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.367452 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-04-04 01:58:57.867438463 +0000 UTC m=+217.533213593 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.368678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" event={"ID":"394b01ea-0b57-4565-aa56-96b6c5372a15","Type":"ContainerStarted","Data":"0ae4a3b30f3083770b96eac4bca79cf5f467ec5193950c34734c50bb6835c7c1"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.371779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" event={"ID":"15b64868-afa1-4d70-bfda-799ed31decdb","Type":"ContainerStarted","Data":"054967a5db79a98c4fe9d95c46fa51fd24b6d0237c06574e9c96ce98220feb2a"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.372894 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerStarted","Data":"2e10c9c28901d26d2cb700af683b207de6979efa1c472f6bfb446404f5b7b855"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.373928 4681 generic.go:334] "Generic (PLEG): container finished" podID="5da93ec3-d19f-40d9-97f1-994998839180" containerID="7fd6a52dc69a2290ee5d872d48e3ed38f746111abac9762955c9fbc7e8e81ec4" exitCode=0 Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.373982 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerDied","Data":"7fd6a52dc69a2290ee5d872d48e3ed38f746111abac9762955c9fbc7e8e81ec4"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.375726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587798-tmssr" event={"ID":"fcd50c8c-38c5-4c42-930d-2235c4384328","Type":"ContainerStarted","Data":"5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.377936 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" event={"ID":"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4","Type":"ContainerStarted","Data":"506ea35e33ea7f12bd1f1a9804e6494b655169d3b6a4d9f0bf515024a8afb565"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.378769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" event={"ID":"b3fc9a5b-081d-4321-ac46-42992adcf541","Type":"ContainerStarted","Data":"841344d1ce7cf1d8cb68492982f446a4c85c262e7c57c8dc9b56f61d67156797"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.381978 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" event={"ID":"fd99e1c1-f4c2-42cb-ad67-76f781407b88","Type":"ContainerStarted","Data":"b6b9dddfef47a87318b46a1048fce0bc1fcbf92401eb2d7ea8809d017244ae60"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.384024 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" event={"ID":"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2","Type":"ContainerStarted","Data":"48ff1317436fb6d5c235858f32a2645392bbb6b56d713036a082616b94df8d92"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 
01:58:57.385923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" event={"ID":"56028b8f-0d6b-4f7f-b4d6-cefc5acec683","Type":"ContainerStarted","Data":"5d5a3ca11a18835dd51381e9bd77ccf6a2a118940dae5dbf92c5990b703d2c55"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.387240 4681 generic.go:334] "Generic (PLEG): container finished" podID="ea58e7c7-9e3b-42ea-898f-c161a7ce17d7" containerID="b3fd87e77de4a3cfce71183bc82998ae4537994f51a4c3112193c70cafbd4846" exitCode=0 Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.387313 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" event={"ID":"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7","Type":"ContainerDied","Data":"b3fd87e77de4a3cfce71183bc82998ae4537994f51a4c3112193c70cafbd4846"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.388305 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c6ktd" event={"ID":"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798","Type":"ContainerStarted","Data":"fb66f684fcb12fc8bf6dfd6dc2f7f92e1acc9a42c1878c52bf543bcb51c94850"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.389185 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" event={"ID":"cf31d35d-7049-4892-8bd1-3dc9deb4325c","Type":"ContainerStarted","Data":"ee6f87ecb0c0bc0c067f33e8d5c096d0d5b33113b00168bcf0c39e5524ec507d"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.390763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" event={"ID":"ad077696-8d80-47a9-9bb2-23764ccd2b6a","Type":"ContainerStarted","Data":"0c4f8b32bf0927c1c5fb3d2f21ce4fbb974b6af921799f562d928e25829113d5"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.392526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" event={"ID":"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a","Type":"ContainerStarted","Data":"dfb53a9838ae7da2900b72730dec271d562c080298c0a9671181461f7be926f9"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.394474 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" event={"ID":"0881096e-2031-4e7a-8a1a-927fcceccf61","Type":"ContainerStarted","Data":"1a7533d0da254d325720f4f9d0ffdce7bee88b5bed4ab82658a45c0e9f3a39e3"} Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.397124 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.443760 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.445826 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x"] Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.467985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.468115 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:58:57.968096357 +0000 UTC m=+217.633871487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.468185 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.469170 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:57.969157216 +0000 UTC m=+217.634932516 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.569795 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.569934 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.069907862 +0000 UTC m=+217.735682992 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.570061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.570693 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.070673363 +0000 UTC m=+217.736448513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.630868 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645ae111_522a_4216_aadd_0901313020ce.slice/crio-3942e06a25bb8844313cc5b0325583696d2e2ee8b3a96bbe76fcff6695326649 WatchSource:0}: Error finding container 3942e06a25bb8844313cc5b0325583696d2e2ee8b3a96bbe76fcff6695326649: Status 404 returned error can't find the container with id 3942e06a25bb8844313cc5b0325583696d2e2ee8b3a96bbe76fcff6695326649 Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.636298 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1182c93a_3e68_4418_aeb7_8394689b55c2.slice/crio-261aafb73880dc8db67e73b524a78680d8a75819c18611fa2b0584d8e7dc1373 WatchSource:0}: Error finding container 261aafb73880dc8db67e73b524a78680d8a75819c18611fa2b0584d8e7dc1373: Status 404 returned error can't find the container with id 261aafb73880dc8db67e73b524a78680d8a75819c18611fa2b0584d8e7dc1373 Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.637916 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod559af3cb_f642_4e99_91e1_155840a1629c.slice/crio-532e6c334b62ef69d0b2f8a3b54e714d2e2894cb9a17a56f7cfd94c52cf0d1e9 WatchSource:0}: Error finding container 532e6c334b62ef69d0b2f8a3b54e714d2e2894cb9a17a56f7cfd94c52cf0d1e9: Status 404 returned error can't find the container 
with id 532e6c334b62ef69d0b2f8a3b54e714d2e2894cb9a17a56f7cfd94c52cf0d1e9 Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.642036 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5999fa11_c8b4_4e7f_ae21_1b570aa79853.slice/crio-6d3d27856c4e89d65d97ab60376a7bb22aa002a60cb19be2f9182367fc218445 WatchSource:0}: Error finding container 6d3d27856c4e89d65d97ab60376a7bb22aa002a60cb19be2f9182367fc218445: Status 404 returned error can't find the container with id 6d3d27856c4e89d65d97ab60376a7bb22aa002a60cb19be2f9182367fc218445 Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.655500 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7623b88_9c92_45ab_b541_ee947e5c67df.slice/crio-bcb55315cb0b3c170edf19271f849fafa3a421d283568ca845a3fdea78e33e59 WatchSource:0}: Error finding container bcb55315cb0b3c170edf19271f849fafa3a421d283568ca845a3fdea78e33e59: Status 404 returned error can't find the container with id bcb55315cb0b3c170edf19271f849fafa3a421d283568ca845a3fdea78e33e59 Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.661608 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42fde299_09b3_4bec_83c9_71af1d27475a.slice/crio-eb98ea79f49f9c4081a2d6ef3baf6a24565719c4ee9e44d5d70c594584744c5d WatchSource:0}: Error finding container eb98ea79f49f9c4081a2d6ef3baf6a24565719c4ee9e44d5d70c594584744c5d: Status 404 returned error can't find the container with id eb98ea79f49f9c4081a2d6ef3baf6a24565719c4ee9e44d5d70c594584744c5d Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.669044 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30f28982_f167_414c_9ad7_b76a6e8eb5ca.slice/crio-01f49f8593bbc9d545844cd4e48372903cba627ad70ebf988908b4937a38151e 
WatchSource:0}: Error finding container 01f49f8593bbc9d545844cd4e48372903cba627ad70ebf988908b4937a38151e: Status 404 returned error can't find the container with id 01f49f8593bbc9d545844cd4e48372903cba627ad70ebf988908b4937a38151e Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.670649 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.670849 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.170823763 +0000 UTC m=+217.836598893 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.671047 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.671448 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.17143547 +0000 UTC m=+217.837210600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.683873 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc52ff89d_ecee_40a6_aff6_88924f18f386.slice/crio-c7d5c175b3bb9ccb258588bc3e22c194352fdf1232776f98b4355eaf6b68e57d WatchSource:0}: Error finding container c7d5c175b3bb9ccb258588bc3e22c194352fdf1232776f98b4355eaf6b68e57d: Status 404 returned error can't find the container with id c7d5c175b3bb9ccb258588bc3e22c194352fdf1232776f98b4355eaf6b68e57d Apr 04 01:58:57 crc kubenswrapper[4681]: W0404 01:58:57.743192 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b59f8f0_e1c8_4187_a509_1a7f58a0ba37.slice/crio-a343f727f29251fd7514f5594e145c46bcba3015ac79e8761c2c94e7207553c8 WatchSource:0}: Error finding container a343f727f29251fd7514f5594e145c46bcba3015ac79e8761c2c94e7207553c8: Status 404 returned error can't find the container with id a343f727f29251fd7514f5594e145c46bcba3015ac79e8761c2c94e7207553c8 Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.771816 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.772118 4681 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.272098823 +0000 UTC m=+217.937873943 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.872478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.872877 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.372865839 +0000 UTC m=+218.038640959 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:57 crc kubenswrapper[4681]: I0404 01:58:57.975343 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:57 crc kubenswrapper[4681]: E0404 01:58:57.975718 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.475700684 +0000 UTC m=+218.141475794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.005488 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-tv82h"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.077326 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.077610 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.577596132 +0000 UTC m=+218.243371252 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.169369 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" podStartSLOduration=163.169352645 podStartE2EDuration="2m43.169352645s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:58.169056337 +0000 UTC m=+217.834831457" watchObservedRunningTime="2026-04-04 01:58:58.169352645 +0000 UTC m=+217.835127765" Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.181807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.182081 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.682067522 +0000 UTC m=+218.347842642 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.219300 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.231504 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.283119 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.283559 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.783540099 +0000 UTC m=+218.449315239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.384026 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.384162 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.884140461 +0000 UTC m=+218.549915581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.384205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.384545 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.884537022 +0000 UTC m=+218.550312142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.397440 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5qvq" event={"ID":"a7623b88-9c92-45ab-b541-ee947e5c67df","Type":"ContainerStarted","Data":"bcb55315cb0b3c170edf19271f849fafa3a421d283568ca845a3fdea78e33e59"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.398226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" event={"ID":"f9a818bf-9611-4945-9350-97ca20b42b26","Type":"ContainerStarted","Data":"66835e0c744db53d524562bb1bd09b039f56a1b4b168192bdcf6246fab413472"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.399364 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tv82h" event={"ID":"82f388a5-daaa-45da-ba97-d3ea85530dfa","Type":"ContainerStarted","Data":"3d9b865a98714a5b16504494656f0e9c657f1433e45da7a751c065e7d77ba385"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.400377 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" event={"ID":"645ae111-522a-4216-aadd-0901313020ce","Type":"ContainerStarted","Data":"3942e06a25bb8844313cc5b0325583696d2e2ee8b3a96bbe76fcff6695326649"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.401153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" 
event={"ID":"5999fa11-c8b4-4e7f-ae21-1b570aa79853","Type":"ContainerStarted","Data":"6d3d27856c4e89d65d97ab60376a7bb22aa002a60cb19be2f9182367fc218445"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.402032 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kwds4" event={"ID":"c52ff89d-ecee-40a6-aff6-88924f18f386","Type":"ContainerStarted","Data":"c7d5c175b3bb9ccb258588bc3e22c194352fdf1232776f98b4355eaf6b68e57d"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.402775 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" event={"ID":"42fde299-09b3-4bec-83c9-71af1d27475a","Type":"ContainerStarted","Data":"eb98ea79f49f9c4081a2d6ef3baf6a24565719c4ee9e44d5d70c594584744c5d"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.403643 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-d25mp" event={"ID":"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37","Type":"ContainerStarted","Data":"a343f727f29251fd7514f5594e145c46bcba3015ac79e8761c2c94e7207553c8"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.404510 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" event={"ID":"30f28982-f167-414c-9ad7-b76a6e8eb5ca","Type":"ContainerStarted","Data":"01f49f8593bbc9d545844cd4e48372903cba627ad70ebf988908b4937a38151e"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.405247 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" event={"ID":"1182c93a-3e68-4418-aeb7-8394689b55c2","Type":"ContainerStarted","Data":"261aafb73880dc8db67e73b524a78680d8a75819c18611fa2b0584d8e7dc1373"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.406106 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" event={"ID":"559af3cb-f642-4e99-91e1-155840a1629c","Type":"ContainerStarted","Data":"532e6c334b62ef69d0b2f8a3b54e714d2e2894cb9a17a56f7cfd94c52cf0d1e9"} Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.428720 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.431033 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-dnkcc"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.432842 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.434509 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.436093 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4"] Apr 04 01:58:58 crc kubenswrapper[4681]: W0404 01:58:58.446913 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c6431a6_2eaa_4931_9891_f6c08c3ed5ce.slice/crio-5e30b2297bceb2a7f86ba3d98fec29f96812f499b46a1b74ca4503289c61b09a WatchSource:0}: Error finding container 5e30b2297bceb2a7f86ba3d98fec29f96812f499b46a1b74ca4503289c61b09a: Status 404 returned error can't find the container with id 5e30b2297bceb2a7f86ba3d98fec29f96812f499b46a1b74ca4503289c61b09a Apr 04 01:58:58 crc kubenswrapper[4681]: W0404 01:58:58.454745 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1c92fd3_b8d6_4db7_bf23_784d5eb8ac5e.slice/crio-23f32fe7e9965c4ce8809f4c610bf72a164d9b569d174dbc0c4f54c4a7f2bfc4 WatchSource:0}: Error finding container 23f32fe7e9965c4ce8809f4c610bf72a164d9b569d174dbc0c4f54c4a7f2bfc4: Status 404 returned error can't find the container with id 23f32fe7e9965c4ce8809f4c610bf72a164d9b569d174dbc0c4f54c4a7f2bfc4 Apr 04 01:58:58 crc kubenswrapper[4681]: W0404 01:58:58.457479 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f9c5e4_05ac_48dd_8e04_81b8087e3a72.slice/crio-231aa92702954ebb27c5dda9e2f133a58579b6fedbc18c7d450b7b1ef8b898ed WatchSource:0}: Error finding container 231aa92702954ebb27c5dda9e2f133a58579b6fedbc18c7d450b7b1ef8b898ed: Status 404 returned error can't find the container with id 231aa92702954ebb27c5dda9e2f133a58579b6fedbc18c7d450b7b1ef8b898ed Apr 04 01:58:58 crc kubenswrapper[4681]: W0404 01:58:58.462197 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbede535_d73e_41cf_b483_6f6794647f90.slice/crio-fb98928761d5fefab3e992b7d9da909cb6dc3ad5bb228d443d860b51cfcdaaff WatchSource:0}: Error finding container fb98928761d5fefab3e992b7d9da909cb6dc3ad5bb228d443d860b51cfcdaaff: Status 404 returned error can't find the container with id fb98928761d5fefab3e992b7d9da909cb6dc3ad5bb228d443d860b51cfcdaaff Apr 04 01:58:58 crc kubenswrapper[4681]: W0404 01:58:58.479775 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77f92d06_8808_403f_a105_192cdc57730d.slice/crio-efbdf398c3a70edaed96a04fd1a499d3b35866bc6f60f0b57e4a9b1bbc592fbd WatchSource:0}: Error finding container efbdf398c3a70edaed96a04fd1a499d3b35866bc6f60f0b57e4a9b1bbc592fbd: Status 404 returned error can't find the container with id 
efbdf398c3a70edaed96a04fd1a499d3b35866bc6f60f0b57e4a9b1bbc592fbd Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.485802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.487150 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:58.987127959 +0000 UTC m=+218.652903079 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.588722 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.589017 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-04-04 01:58:59.089005717 +0000 UTC m=+218.754780837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.689918 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.690070 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.190049651 +0000 UTC m=+218.855824771 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.690143 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.690466 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.190455092 +0000 UTC m=+218.856230212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.791151 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.791311 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.291290891 +0000 UTC m=+218.957066031 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.791475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.791818 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.291808725 +0000 UTC m=+218.957583855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.891858 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.892028 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.391992815 +0000 UTC m=+219.057767945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.892145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.892444 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.392428087 +0000 UTC m=+219.058203327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.993520 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.993874 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.493853812 +0000 UTC m=+219.159628932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:58 crc kubenswrapper[4681]: I0404 01:58:58.993933 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:58 crc kubenswrapper[4681]: E0404 01:58:58.994200 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.494188511 +0000 UTC m=+219.159963641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.095184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.096299 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.596234234 +0000 UTC m=+219.262009384 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.197529 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.198028 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.698003828 +0000 UTC m=+219.363779018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.299211 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.299530 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.799499035 +0000 UTC m=+219.465274185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.300120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.300639 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.800621177 +0000 UTC m=+219.466396337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.402410 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.402908 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:58:59.902882785 +0000 UTC m=+219.568657945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.411315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" event={"ID":"77f92d06-8808-403f-a105-192cdc57730d","Type":"ContainerStarted","Data":"efbdf398c3a70edaed96a04fd1a499d3b35866bc6f60f0b57e4a9b1bbc592fbd"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.412800 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" event={"ID":"cbede535-d73e-41cf-b483-6f6794647f90","Type":"ContainerStarted","Data":"fb98928761d5fefab3e992b7d9da909cb6dc3ad5bb228d443d860b51cfcdaaff"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.413961 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" event={"ID":"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e","Type":"ContainerStarted","Data":"23f32fe7e9965c4ce8809f4c610bf72a164d9b569d174dbc0c4f54c4a7f2bfc4"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.415600 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" event={"ID":"3d528d31-23f7-48f0-9e52-5357bd410c3d","Type":"ContainerStarted","Data":"b560269893e7e271dd621db73a0170787e04345c2759df2026a71e53c6ae1ed9"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.417148 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-t522l" event={"ID":"0b28142c-7b85-406e-b158-42517bab7f11","Type":"ContainerStarted","Data":"75acc2d5d4742918bd229f104b0e1670f1a8e438b16f50b2fc0a86469929bab3"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.418288 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" event={"ID":"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72","Type":"ContainerStarted","Data":"231aa92702954ebb27c5dda9e2f133a58579b6fedbc18c7d450b7b1ef8b898ed"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.419529 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" event={"ID":"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce","Type":"ContainerStarted","Data":"5e30b2297bceb2a7f86ba3d98fec29f96812f499b46a1b74ca4503289c61b09a"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.420723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" event={"ID":"966e01cf-5149-43ef-8967-517e68e2bbaa","Type":"ContainerStarted","Data":"a18bb7e65df8663198eaf96166d1c10a5d066f067a893c6e55b67a1aa4ca67f7"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.421732 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" event={"ID":"a2c6ee2e-54a9-4992-ac77-2b1f65957602","Type":"ContainerStarted","Data":"7f0675b5eb45a48e93c1d407043d0e61c34d8812ab0b546f27c4fd795b8202ce"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.422771 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" event={"ID":"867152c5-9f9e-40b4-8623-3437a9793b5d","Type":"ContainerStarted","Data":"b50ffdea9e478fb2c83e20b38d3f3146f3e92ad969b4cdfecabf922587c126a9"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.423757 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" event={"ID":"2036b47b-0f29-4c79-b26c-c8877d60cfc4","Type":"ContainerStarted","Data":"eb2a2e467536f36856748b2cc2ec1e5f2a6742104474afdf344e0cd00230ddb9"} Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.490876 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" podStartSLOduration=164.490850222 podStartE2EDuration="2m44.490850222s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:58:59.48796381 +0000 UTC m=+219.153738930" watchObservedRunningTime="2026-04-04 01:58:59.490850222 +0000 UTC m=+219.156625382" Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.506188 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.506382 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.006367458 +0000 UTC m=+219.672142578 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.607819 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.608056 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.108010619 +0000 UTC m=+219.773785779 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.608140 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.608556 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.108540794 +0000 UTC m=+219.774315934 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.708835 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.709027 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.208997692 +0000 UTC m=+219.874772822 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.710138 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.710664 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.210639988 +0000 UTC m=+219.876415138 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.811484 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.811770 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.311734633 +0000 UTC m=+219.977509813 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:58:59 crc kubenswrapper[4681]: I0404 01:58:59.912915 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:58:59 crc kubenswrapper[4681]: E0404 01:58:59.913578 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.413540119 +0000 UTC m=+220.079315239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.017688 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.018027 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.517986769 +0000 UTC m=+220.183761929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.018386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.018857 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.518831332 +0000 UTC m=+220.184606492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.120110 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.120388 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.62035239 +0000 UTC m=+220.286127540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.120762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.121510 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.621494532 +0000 UTC m=+220.287269692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.222235 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.222579 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.722541936 +0000 UTC m=+220.388317086 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.324010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.324628 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.824611419 +0000 UTC m=+220.490386549 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.424943 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.425198 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.925164809 +0000 UTC m=+220.590939959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.425447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.425858 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:00.925842808 +0000 UTC m=+220.591617958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.433321 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" event={"ID":"b54a0848-6df8-47da-8537-a01d44322ca4","Type":"ContainerStarted","Data":"5476de75c33167b46440343021a24cce3c2eb98edfc710e1899fa446d9aa4d53"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.435939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c6ktd" event={"ID":"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798","Type":"ContainerStarted","Data":"be2ecda88d01e3d2d5a2743a7cf1005605b8427a5b9dfb0fefee04ff5de9f9b0"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.437401 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" event={"ID":"cf31d35d-7049-4892-8bd1-3dc9deb4325c","Type":"ContainerStarted","Data":"5014076add6121ebe456d78c4db4f66789d532fd54fe986c32e0c21de128e674"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.440696 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" event={"ID":"b3fc9a5b-081d-4321-ac46-42992adcf541","Type":"ContainerStarted","Data":"48f92583d96237bf98858630a8c28a5d19fb0c8bbf6e89d15e95cb5ff6a6207b"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.442718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" event={"ID":"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73","Type":"ContainerStarted","Data":"bf1675afa1371d17b5bb0f92e7a0e8065323d33a0ec0a6ebe8fff02c09a13723"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.444412 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" event={"ID":"0881096e-2031-4e7a-8a1a-927fcceccf61","Type":"ContainerStarted","Data":"c4af30d3df5ea5ae05a0708070ea6de6afc05edff2b0a1565069e8f86df3e927"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.445850 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" event={"ID":"fd99e1c1-f4c2-42cb-ad67-76f781407b88","Type":"ContainerStarted","Data":"b28c9c5bddb51275ea4d5b4c5461af6a26df891e563573bb53723a5129353dfe"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.447431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kwds4" event={"ID":"c52ff89d-ecee-40a6-aff6-88924f18f386","Type":"ContainerStarted","Data":"96014bc1d1db1937eec1ce0d7bc618dbb83139fd165e6cc32f0eaa6665038769"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.449660 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" event={"ID":"56028b8f-0d6b-4f7f-b4d6-cefc5acec683","Type":"ContainerStarted","Data":"40c24a689a1c1c2e579dcd5c0ae257b02ee1eb59fb9e67c39e8b85f9acac6f28"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.451244 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn5hz" event={"ID":"b964ba7c-9b8c-40d8-b671-915649b4d77b","Type":"ContainerStarted","Data":"427d17e02e448892f5ae843767760c5a27fcd731bbc3ed6efdec3be022b5a7b5"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.452498 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" event={"ID":"394b01ea-0b57-4565-aa56-96b6c5372a15","Type":"ContainerStarted","Data":"2a1f24c7419777930ac69ece450ebbe839f350b3626d923e81d0e0b7a955385c"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.453726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" event={"ID":"30f28982-f167-414c-9ad7-b76a6e8eb5ca","Type":"ContainerStarted","Data":"21ed3804f54629b088633661702e9cb140b9df7e71e3342b429bedbe130add65"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.456209 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" event={"ID":"645ae111-522a-4216-aadd-0901313020ce","Type":"ContainerStarted","Data":"fb8fa00d3aabe8c157730ec75f6a8af9fc1c7ff263585679795f38187a9fffcb"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.457786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerStarted","Data":"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.459225 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" event={"ID":"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2","Type":"ContainerStarted","Data":"2a103efdff9b229fccc660dc06f2ff467d648bb8ee58b83149005f4fd88bb717"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.461040 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerStarted","Data":"bc6974a97af0c80b9030a62cc6b2005af039208bf320fd5f46c9908abce5d29f"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.462582 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" event={"ID":"42fde299-09b3-4bec-83c9-71af1d27475a","Type":"ContainerStarted","Data":"a606b74ce3d0f018b88b04eff4acd3a2d0fd8e0da446a3c38609b52e7180e83c"}
Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.526135 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.526494 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.026462031 +0000 UTC m=+220.692237191 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.526586 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.526877 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.026867722 +0000 UTC m=+220.692642842 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.628325 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.628490 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.128468162 +0000 UTC m=+220.794243302 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.628848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.629175 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.129160601 +0000 UTC m=+220.794935711 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.730114 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.730336 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.230300478 +0000 UTC m=+220.896075638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.730503 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.730822 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.230805522 +0000 UTC m=+220.896580632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.831719 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.831969 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.331932449 +0000 UTC m=+220.997707609 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.832254 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.832787 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.332769863 +0000 UTC m=+220.998545023 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.933776 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.933992 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.43394228 +0000 UTC m=+221.099717450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:00 crc kubenswrapper[4681]: I0404 01:59:00.934319 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:00 crc kubenswrapper[4681]: E0404 01:59:00.934822 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.434803914 +0000 UTC m=+221.100579074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.036101 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.036367 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.536318382 +0000 UTC m=+221.202093502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.036511 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.036812 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.536805285 +0000 UTC m=+221.202580405 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.141939 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.142091 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.642072429 +0000 UTC m=+221.307847549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.142423 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.142853 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.64283606 +0000 UTC m=+221.308611220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.243382 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.243636 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.743602416 +0000 UTC m=+221.409377546 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.243699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.244444 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.74442679 +0000 UTC m=+221.410201980 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.344530 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.345329 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.84530964 +0000 UTC m=+221.511084760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.446688 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.447563 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:01.947535206 +0000 UTC m=+221.613310356 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.471157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5qvq" event={"ID":"a7623b88-9c92-45ab-b541-ee947e5c67df","Type":"ContainerStarted","Data":"ca3ad12c763d3433fd070fff7df9a4fae27620e842be4857d2e40739b8689548"} Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.548530 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.548846 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.048808157 +0000 UTC m=+221.714583297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.549026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.549524 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.049504477 +0000 UTC m=+221.715279677 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.651540 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.651998 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.151910459 +0000 UTC m=+221.817685639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.652249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.652906 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.152876796 +0000 UTC m=+221.818651986 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.753837 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.753941 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.25391747 +0000 UTC m=+221.919692600 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.754021 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.754368 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.254355603 +0000 UTC m=+221.920130743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.855210 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.855411 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.355375746 +0000 UTC m=+222.021150866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.855697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.856069 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.356058615 +0000 UTC m=+222.021833815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.956940 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.957077 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.457059478 +0000 UTC m=+222.122834598 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:01 crc kubenswrapper[4681]: I0404 01:59:01.957346 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:01 crc kubenswrapper[4681]: E0404 01:59:01.957688 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.457678265 +0000 UTC m=+222.123453385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.058199 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.058357 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.558331969 +0000 UTC m=+222.224107089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.058486 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.058804 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.558796532 +0000 UTC m=+222.224571652 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.159819 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.160025 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.65999263 +0000 UTC m=+222.325767750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.160111 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.160515 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.660496685 +0000 UTC m=+222.326271825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.260993 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.261198 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.761169459 +0000 UTC m=+222.426944589 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.261390 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.261748 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.761738384 +0000 UTC m=+222.427513514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.362992 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.363158 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.863129238 +0000 UTC m=+222.528904368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.363399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.363684 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.863672183 +0000 UTC m=+222.529447303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.464449 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.464657 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.964620185 +0000 UTC m=+222.630395335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.464804 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.465317 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:02.965300303 +0000 UTC m=+222.631075463 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.480341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" event={"ID":"867152c5-9f9e-40b4-8623-3437a9793b5d","Type":"ContainerStarted","Data":"f699336c8258ea169d6c3c60ab186072142716549775d122c424fef520decb32"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.482783 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" event={"ID":"2036b47b-0f29-4c79-b26c-c8877d60cfc4","Type":"ContainerStarted","Data":"d5cb6f089396063f49b1d1bb5c60af491c02f4356aeff3d618c01b9ecfef54dd"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.485113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" event={"ID":"b8f9c5e4-05ac-48dd-8e04-81b8087e3a72","Type":"ContainerStarted","Data":"b73e99fb4694265d8f7157479cf2be506846ba5b41535bd7200a0a49deffb5e3"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.487222 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" event={"ID":"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e","Type":"ContainerStarted","Data":"167a709b1dafed300d91bc9af20715e6325d978589f1c72cb7acdabb5f722a54"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.489427 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" event={"ID":"77f92d06-8808-403f-a105-192cdc57730d","Type":"ContainerStarted","Data":"b72b490a9a87657043759068c316ab4e55728841b1641eeb4f35e674b5965a2c"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.491787 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" event={"ID":"cbede535-d73e-41cf-b483-6f6794647f90","Type":"ContainerStarted","Data":"137b49d36236682a5892f9c88d55ba606f4c8f442ad4a967ca1e8a28140cda4a"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.494412 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" event={"ID":"f9a818bf-9611-4945-9350-97ca20b42b26","Type":"ContainerStarted","Data":"e6d53063a95559c19e9ca96dc2fb8363db32e94796634599bd272c393ad6976b"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.497191 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" event={"ID":"5999fa11-c8b4-4e7f-ae21-1b570aa79853","Type":"ContainerStarted","Data":"cab3866bad45ad30993ca0f2a7ae69962e5d93a02a705770fc9c099f18cb6365"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.499484 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" event={"ID":"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce","Type":"ContainerStarted","Data":"cc20810e33b8962e84bd3e7eaed89b44c7e6eda18d45cff69dd6a272f55126ce"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.508174 4681 generic.go:334] "Generic (PLEG): container finished" podID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerID="bc6974a97af0c80b9030a62cc6b2005af039208bf320fd5f46c9908abce5d29f" exitCode=0 Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.508353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerDied","Data":"bc6974a97af0c80b9030a62cc6b2005af039208bf320fd5f46c9908abce5d29f"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.511174 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" event={"ID":"559af3cb-f642-4e99-91e1-155840a1629c","Type":"ContainerStarted","Data":"10004ce6501c19d7f74b260f5bea4cd3eee850d332c372dd4781b2c61490db22"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.516793 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" event={"ID":"1182c93a-3e68-4418-aeb7-8394689b55c2","Type":"ContainerStarted","Data":"2753d908d6de165cd9e4ec66646ecc2bce0121441e484548758f88050cc459a6"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.520809 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" event={"ID":"15b64868-afa1-4d70-bfda-799ed31decdb","Type":"ContainerStarted","Data":"d1a5b7d26a0947cc4f31dfbbf4cb4cd4b9b7928fa9a668a0f31a95a5e418b14b"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.523367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" event={"ID":"ad077696-8d80-47a9-9bb2-23764ccd2b6a","Type":"ContainerStarted","Data":"690ec968f43cbb16e13d690a7fe1602b6dddefa6119b24403b8806e2a6b53aa8"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.524925 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-d25mp" event={"ID":"8b59f8f0-e1c8-4187-a509-1a7f58a0ba37","Type":"ContainerStarted","Data":"382e93dc3496c2553ee8197822fa3f5d1b7469e2e54735cfad8934798718640b"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.527322 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-tv82h" event={"ID":"82f388a5-daaa-45da-ba97-d3ea85530dfa","Type":"ContainerStarted","Data":"0e7fb6a0bcb18a08f19f692ed57e39d66a767e6959bf30cbf939f684a688d2b3"} Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.528250 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.528475 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.530685 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.530733 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.539641 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.539768 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" 
containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.566070 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.566307 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.066255565 +0000 UTC m=+222.732030695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.566922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.567672 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.067656065 +0000 UTC m=+222.733431205 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.583573 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podStartSLOduration=167.583548223 podStartE2EDuration="2m47.583548223s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:02.581471924 +0000 UTC m=+222.247247054" watchObservedRunningTime="2026-04-04 01:59:02.583548223 +0000 UTC m=+222.249323353" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.602633 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podStartSLOduration=167.602618769 podStartE2EDuration="2m47.602618769s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:02.601453296 +0000 UTC m=+222.267228416" watchObservedRunningTime="2026-04-04 01:59:02.602618769 +0000 UTC m=+222.268393889" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.627530 4681 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" podStartSLOduration=166.627509079 podStartE2EDuration="2m46.627509079s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:02.626185082 +0000 UTC m=+222.291960202" watchObservedRunningTime="2026-04-04 01:59:02.627509079 +0000 UTC m=+222.293284209" Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.667582 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.667734 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.167714662 +0000 UTC m=+222.833489792 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.667914 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.668214 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.168204695 +0000 UTC m=+222.833979815 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.768918 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.769139 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.269097895 +0000 UTC m=+222.934873015 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.769518 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.769872 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.269856536 +0000 UTC m=+222.935631676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.870311 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.870546 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.37051571 +0000 UTC m=+223.036290830 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.870734 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.871169 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.371131347 +0000 UTC m=+223.036906497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.971450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.971668 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.471632676 +0000 UTC m=+223.137407806 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:02 crc kubenswrapper[4681]: I0404 01:59:02.971777 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:02 crc kubenswrapper[4681]: E0404 01:59:02.972091 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.472079578 +0000 UTC m=+223.137854698 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.073079 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.073396 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.573361169 +0000 UTC m=+223.239136349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.073577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.074083 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.574061039 +0000 UTC m=+223.239836199 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.181124 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.181707 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.681686618 +0000 UTC m=+223.347461748 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.282605 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.282948 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.782934398 +0000 UTC m=+223.448709518 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.383272 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.383369 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.883337614 +0000 UTC m=+223.549112734 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.383603 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.383942 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.883933201 +0000 UTC m=+223.549708321 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.485139 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.485379 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.985346106 +0000 UTC m=+223.651121226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.485494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.485847 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:03.985832779 +0000 UTC m=+223.651607969 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.536304 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" event={"ID":"ea58e7c7-9e3b-42ea-898f-c161a7ce17d7","Type":"ContainerStarted","Data":"eb5d69f0087591b6c5cf4a5ab446fb530eae56f3d436be973741ed31cde6a824"} Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.537876 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerStarted","Data":"0ccf459dde3bb6380a86fadcdf1e4be0d0d77704eb533b1f0f3b8a5a51272132"} Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.538530 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.538567 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.538783 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l 
container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.538838 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.557497 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" podStartSLOduration=167.557476855 podStartE2EDuration="2m47.557476855s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.556874109 +0000 UTC m=+223.222649239" watchObservedRunningTime="2026-04-04 01:59:03.557476855 +0000 UTC m=+223.223251975" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.582955 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" podStartSLOduration=167.582931172 podStartE2EDuration="2m47.582931172s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.580548476 +0000 UTC m=+223.246323606" watchObservedRunningTime="2026-04-04 01:59:03.582931172 +0000 UTC m=+223.248706292" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.587183 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.587619 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.087590103 +0000 UTC m=+223.753365233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.588920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.590235 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.090224557 +0000 UTC m=+223.755999667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.598150 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" podStartSLOduration=168.59813189 podStartE2EDuration="2m48.59813189s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.591664557 +0000 UTC m=+223.257439687" watchObservedRunningTime="2026-04-04 01:59:03.59813189 +0000 UTC m=+223.263907020" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.639244 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" podStartSLOduration=167.639228116 podStartE2EDuration="2m47.639228116s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.618178424 +0000 UTC m=+223.283953554" watchObservedRunningTime="2026-04-04 01:59:03.639228116 +0000 UTC m=+223.305003256" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.641098 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fn5hz" podStartSLOduration=168.641087459 podStartE2EDuration="2m48.641087459s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.637783376 +0000 UTC m=+223.303558526" watchObservedRunningTime="2026-04-04 01:59:03.641087459 +0000 UTC m=+223.306862599" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.655442 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c6ktd" podStartSLOduration=168.655428132 podStartE2EDuration="2m48.655428132s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.653835318 +0000 UTC m=+223.319610438" watchObservedRunningTime="2026-04-04 01:59:03.655428132 +0000 UTC m=+223.321203262" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.671076 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" podStartSLOduration=167.671059793 podStartE2EDuration="2m47.671059793s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.669173849 +0000 UTC m=+223.334948969" watchObservedRunningTime="2026-04-04 01:59:03.671059793 +0000 UTC m=+223.336834923" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.691431 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.691898 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.191880468 +0000 UTC m=+223.857655588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.693134 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" podStartSLOduration=167.693117434 podStartE2EDuration="2m47.693117434s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.690696076 +0000 UTC m=+223.356471206" watchObservedRunningTime="2026-04-04 01:59:03.693117434 +0000 UTC m=+223.358892554" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.726124 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" podStartSLOduration=167.726108792 podStartE2EDuration="2m47.726108792s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.709151474 +0000 UTC m=+223.374926604" watchObservedRunningTime="2026-04-04 01:59:03.726108792 +0000 UTC m=+223.391883912" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 
01:59:03.727095 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" podStartSLOduration=167.727067259 podStartE2EDuration="2m47.727067259s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.725207887 +0000 UTC m=+223.390983007" watchObservedRunningTime="2026-04-04 01:59:03.727067259 +0000 UTC m=+223.392842369" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.739669 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kwds4" podStartSLOduration=10.739649973 podStartE2EDuration="10.739649973s" podCreationTimestamp="2026-04-04 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.739280713 +0000 UTC m=+223.405055833" watchObservedRunningTime="2026-04-04 01:59:03.739649973 +0000 UTC m=+223.405425093" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.752008 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c76br" podStartSLOduration=167.751991301 podStartE2EDuration="2m47.751991301s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.750777326 +0000 UTC m=+223.416552466" watchObservedRunningTime="2026-04-04 01:59:03.751991301 +0000 UTC m=+223.417766421" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.769993 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" podStartSLOduration=167.769968007 
podStartE2EDuration="2m47.769968007s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:03.764363889 +0000 UTC m=+223.430139029" watchObservedRunningTime="2026-04-04 01:59:03.769968007 +0000 UTC m=+223.435743127" Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.793040 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.793372 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.293360765 +0000 UTC m=+223.959135875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.898677 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.898879 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.398858364 +0000 UTC m=+224.064633494 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.899034 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:03 crc kubenswrapper[4681]: E0404 01:59:03.899437 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.39942616 +0000 UTC m=+224.065201290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:03 crc kubenswrapper[4681]: I0404 01:59:03.999939 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.000116 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.500087964 +0000 UTC m=+224.165863094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.000741 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.001160 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.501141194 +0000 UTC m=+224.166916324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.102370 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.102563 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.602534668 +0000 UTC m=+224.268309788 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.102654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.103043 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.603027691 +0000 UTC m=+224.268802831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.204155 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.204324 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.704302372 +0000 UTC m=+224.370077502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.204634 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.204798 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.205601 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.205676 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.705659661 +0000 UTC m=+224.371434851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.217329 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.220807 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.306878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc 
kubenswrapper[4681]: E0404 01:59:04.307104 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.807047234 +0000 UTC m=+224.472822354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.307160 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.307223 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.307304 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.307784 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.807763454 +0000 UTC m=+224.473538674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.311605 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.311840 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.408850 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.409657 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.909631721 +0000 UTC m=+224.575406861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.409789 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.410242 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:04.910232639 +0000 UTC m=+224.576007769 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.417692 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.426594 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.432486 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.513090 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.513638 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.013601418 +0000 UTC m=+224.679376538 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.545894 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" event={"ID":"3d528d31-23f7-48f0-9e52-5357bd410c3d","Type":"ContainerStarted","Data":"4324dd8faf8fb5ee4728a6dee491316b888ae7532dd881d0b3c2c22e15f9b368"} Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.548177 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerStarted","Data":"9814347660aac18022f412075fdc0df5de3558d1b5ffe6e7fbd8e6cbaebc905b"} Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.563774 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" podStartSLOduration=168.563750519 podStartE2EDuration="2m48.563750519s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.560835657 +0000 UTC m=+224.226610777" watchObservedRunningTime="2026-04-04 01:59:04.563750519 +0000 UTC m=+224.229525639" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.578336 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-mftw8" podStartSLOduration=168.57831582 
podStartE2EDuration="2m48.57831582s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.574625195 +0000 UTC m=+224.240400315" watchObservedRunningTime="2026-04-04 01:59:04.57831582 +0000 UTC m=+224.244090930" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.615530 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.617537 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.117502703 +0000 UTC m=+224.783277823 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.621404 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-tv82h" podStartSLOduration=11.621385952 podStartE2EDuration="11.621385952s" podCreationTimestamp="2026-04-04 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.614727025 +0000 UTC m=+224.280502145" watchObservedRunningTime="2026-04-04 01:59:04.621385952 +0000 UTC m=+224.287161072" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.621673 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" podStartSLOduration=168.62166752 podStartE2EDuration="2m48.62166752s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.596782659 +0000 UTC m=+224.262557779" watchObservedRunningTime="2026-04-04 01:59:04.62166752 +0000 UTC m=+224.287442640" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.632772 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" podStartSLOduration=168.632748661 podStartE2EDuration="2m48.632748661s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.629158651 +0000 UTC m=+224.294933771" watchObservedRunningTime="2026-04-04 01:59:04.632748661 +0000 UTC m=+224.298523781" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.644612 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" podStartSLOduration=168.644592205 podStartE2EDuration="2m48.644592205s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.642751693 +0000 UTC m=+224.308526833" watchObservedRunningTime="2026-04-04 01:59:04.644592205 +0000 UTC m=+224.310367325" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.661058 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-sp7hg" podStartSLOduration=168.661042438 podStartE2EDuration="2m48.661042438s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.660499513 +0000 UTC m=+224.326274633" watchObservedRunningTime="2026-04-04 01:59:04.661042438 +0000 UTC m=+224.326817558" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.676361 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" podStartSLOduration=168.676342289 podStartE2EDuration="2m48.676342289s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.67535016 +0000 UTC m=+224.341125290" watchObservedRunningTime="2026-04-04 
01:59:04.676342289 +0000 UTC m=+224.342117419" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.692065 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" podStartSLOduration=168.69204709 podStartE2EDuration="2m48.69204709s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.691627449 +0000 UTC m=+224.357402569" watchObservedRunningTime="2026-04-04 01:59:04.69204709 +0000 UTC m=+224.357822210" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.724732 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.724933 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.224904006 +0000 UTC m=+224.890679136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.724980 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.725528 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.225518023 +0000 UTC m=+224.891293143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.741240 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-d25mp" podStartSLOduration=168.741223385 podStartE2EDuration="2m48.741223385s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.739223518 +0000 UTC m=+224.404998638" watchObservedRunningTime="2026-04-04 01:59:04.741223385 +0000 UTC m=+224.406998505" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.743009 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" podStartSLOduration=168.742998604 podStartE2EDuration="2m48.742998604s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:04.711795216 +0000 UTC m=+224.377570336" watchObservedRunningTime="2026-04-04 01:59:04.742998604 +0000 UTC m=+224.408773724" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.826225 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.826403 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.326373261 +0000 UTC m=+224.992148391 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.826472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.826815 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.326801453 +0000 UTC m=+224.992576573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: W0404 01:59:04.893965 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-3f6dbaa59396c2461f43eb2bcdcafb7522441066518cc1e0897fda555395e030 WatchSource:0}: Error finding container 3f6dbaa59396c2461f43eb2bcdcafb7522441066518cc1e0897fda555395e030: Status 404 returned error can't find the container with id 3f6dbaa59396c2461f43eb2bcdcafb7522441066518cc1e0897fda555395e030 Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.927879 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.928024 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.428000843 +0000 UTC m=+225.093775963 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.928411 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:04 crc kubenswrapper[4681]: E0404 01:59:04.928828 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.428820095 +0000 UTC m=+225.094595215 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:04 crc kubenswrapper[4681]: W0404 01:59:04.931681 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-0f9855bd69b9f293abc25a24eae557e2c817731ec1f3b1ac5fa7e838e4e9fa79 WatchSource:0}: Error finding container 0f9855bd69b9f293abc25a24eae557e2c817731ec1f3b1ac5fa7e838e4e9fa79: Status 404 returned error can't find the container with id 0f9855bd69b9f293abc25a24eae557e2c817731ec1f3b1ac5fa7e838e4e9fa79 Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.989387 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.989432 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.991197 4681 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-7knn5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Apr 04 01:59:04 crc kubenswrapper[4681]: I0404 01:59:04.991243 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" podUID="ea58e7c7-9e3b-42ea-898f-c161a7ce17d7" containerName="oauth-apiserver" 
probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.030047 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.030384 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.530367083 +0000 UTC m=+225.196142213 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.131502 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.132121 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.632103687 +0000 UTC m=+225.297878867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.232364 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.232678 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.732660997 +0000 UTC m=+225.398436117 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.333823 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.334096 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.834084242 +0000 UTC m=+225.499859352 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.436675 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.436888 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.936855645 +0000 UTC m=+225.602630775 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.437004 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.437476 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:05.937464642 +0000 UTC m=+225.603239762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.510117 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.511682 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.511768 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.538523 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.538710 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.038682322 +0000 UTC m=+225.704457452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.538745 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.539029 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.039016941 +0000 UTC m=+225.704792061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.569201 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0f9855bd69b9f293abc25a24eae557e2c817731ec1f3b1ac5fa7e838e4e9fa79"} Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.572342 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"3f6dbaa59396c2461f43eb2bcdcafb7522441066518cc1e0897fda555395e030"} Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.576424 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" event={"ID":"30f28982-f167-414c-9ad7-b76a6e8eb5ca","Type":"ContainerStarted","Data":"3f6c2818ffa262881ef60d0ecdd2b3c749bad7e2f7f2943ef9927ec2bb4cf901"} Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.578245 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" event={"ID":"cf31d35d-7049-4892-8bd1-3dc9deb4325c","Type":"ContainerStarted","Data":"01001b4b8e3f18f364c20e0c68f1b11a3aae8eccf58cd809dac1e643c04fc2b9"} Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.580467 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6f0785a6b1e79291dc181995704672bbcc52b474ab2741961af292040d54b0b7"} Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.640177 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.640316 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.140296111 +0000 UTC m=+225.806071231 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.640475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.640871 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.140855267 +0000 UTC m=+225.806630387 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.653582 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.655712 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.655760 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.655801 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.655759 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.656080 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.656109 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.698046 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.698205 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.727724 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.729435 4681 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5wdgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.729469 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.729678 4681 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5wdgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.729709 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.742041 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.742953 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-04-04 01:59:06.24292459 +0000 UTC m=+225.908699720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.847030 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.34701119 +0000 UTC m=+226.012786340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.846645 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.853607 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.855001 4681 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s28vn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.855023 4681 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s28vn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.855068 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" podUID="fd99e1c1-f4c2-42cb-ad67-76f781407b88" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.855057 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" podUID="fd99e1c1-f4c2-42cb-ad67-76f781407b88" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.856687 4681 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-s28vn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 
01:59:05.856729 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" podUID="fd99e1c1-f4c2-42cb-ad67-76f781407b88" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.17:5443/healthz\": dial tcp 10.217.0.17:5443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.910152 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.910385 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.912994 4681 patch_prober.go:28] interesting pod/console-f9d7485db-c6ktd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.913030 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c6ktd" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.948700 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.948846 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.448822466 +0000 UTC m=+226.114597596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.949036 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:05 crc kubenswrapper[4681]: E0404 01:59:05.949468 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.449457783 +0000 UTC m=+226.115232903 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.964184 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.964275 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.964319 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 01:59:05 crc kubenswrapper[4681]: I0404 01:59:05.964398 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 
01:59:06.050011 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.050141 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.550124787 +0000 UTC m=+226.215899907 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.050234 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.050712 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:06.550696913 +0000 UTC m=+226.216472033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.062216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.066365 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kxzxq container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.066419 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.066377 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kxzxq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.066490 4681 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.067978 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.069414 4681 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wpzmf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.069441 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.069620 4681 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wpzmf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.069641 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Apr 
04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.074713 4681 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kxzxq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.074744 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.151842 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.152021 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.651997854 +0000 UTC m=+226.317772964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.152385 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.153515 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.653500027 +0000 UTC m=+226.319275147 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.225670 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.227599 4681 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dv8vg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.227659 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" podUID="5999fa11-c8b4-4e7f-ae21-1b570aa79853" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.227714 4681 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dv8vg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.227753 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" 
podUID="5999fa11-c8b4-4e7f-ae21-1b570aa79853" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.228107 4681 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-dv8vg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.228166 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" podUID="5999fa11-c8b4-4e7f-ae21-1b570aa79853" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.254129 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.254305 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.754280433 +0000 UTC m=+226.420055553 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.254426 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.254756 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.754748716 +0000 UTC m=+226.420523826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.355019 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.355155 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.855131302 +0000 UTC m=+226.520906412 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.355285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.355624 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.855611055 +0000 UTC m=+226.521386165 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.455991 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.456410 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.956376222 +0000 UTC m=+226.622151362 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.456469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.456835 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:06.956822144 +0000 UTC m=+226.622597284 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.510187 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-d25mp"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.511599 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.511662 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.557485 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.557672 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.057647052 +0000 UTC m=+226.723422172 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.557874 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.558212 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.058197257 +0000 UTC m=+226.723972387 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.590422 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" event={"ID":"0881096e-2031-4e7a-8a1a-927fcceccf61","Type":"ContainerStarted","Data":"f08ff4dede6b5a5eb5a09cc53c21961850ef96013b29badd30cbc53f31a6deab"}
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.592476 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" event={"ID":"1d0b49f8-7d1c-41ce-9735-73cfa83aa3a2","Type":"ContainerStarted","Data":"c01ada74b12650d3333e35302bfdcb3e9f89c662628a8404adab176c1b64d427"}
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.606443 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-mcv5m" podStartSLOduration=170.606424395 podStartE2EDuration="2m50.606424395s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:06.604665136 +0000 UTC m=+226.270440256" watchObservedRunningTime="2026-04-04 01:59:06.606424395 +0000 UTC m=+226.272199525"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.659527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.659779 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.159730726 +0000 UTC m=+226.825505846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.659982 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.660744 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.160727214 +0000 UTC m=+226.826502354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.762903 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.763049 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.263024874 +0000 UTC m=+226.928799994 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.763805 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.764572 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.264554666 +0000 UTC m=+226.930329786 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.782788 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.784489 4681 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j4m77 container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.784650 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" podUID="2036b47b-0f29-4c79-b26c-c8877d60cfc4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.784552 4681 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j4m77 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.784859 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" podUID="2036b47b-0f29-4c79-b26c-c8877d60cfc4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.785339 4681 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-j4m77 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body=
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.785378 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" podUID="2036b47b-0f29-4c79-b26c-c8877d60cfc4" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused"
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.865296 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.865672 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.365625541 +0000 UTC m=+227.031400711 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.865880 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.866718 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.366700062 +0000 UTC m=+227.032475182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.967598 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.967735 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.467718395 +0000 UTC m=+227.133493505 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:06 crc kubenswrapper[4681]: I0404 01:59:06.968280 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:06 crc kubenswrapper[4681]: E0404 01:59:06.968699 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.468683142 +0000 UTC m=+227.134458262 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.069089 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.069356 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.569316984 +0000 UTC m=+227.235092124 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.069551 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.069923 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.569912241 +0000 UTC m=+227.235687381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.171018 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.171218 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.671191172 +0000 UTC m=+227.336966292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.171796 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.172049 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.672033576 +0000 UTC m=+227.337808696 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.268785 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.269566 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.271988 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.272463 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.273065 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.273204 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.773186193 +0000 UTC m=+227.438961313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.273327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.273752 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.773721148 +0000 UTC m=+227.439496278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.283728 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.374386 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.374536 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.874510525 +0000 UTC m=+227.540285645 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.374928 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.375056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.375179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.375337 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.875330118 +0000 UTC m=+227.541105238 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.475772 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.475963 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.975931349 +0000 UTC m=+227.641706469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.476018 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.476152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.476213 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.476346 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.476579 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:07.976571467 +0000 UTC m=+227.642346587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.505491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.516828 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.517010 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: 
connection refused" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.577064 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.577251 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.07722057 +0000 UTC m=+227.742995690 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.577484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.577788 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:08.077777146 +0000 UTC m=+227.743552266 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.597347 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-d5qvq" event={"ID":"a7623b88-9c92-45ab-b541-ee947e5c67df","Type":"ContainerStarted","Data":"8d80b9a40d2351208c367b9f16ae8772f3f394683f72c428e6969a44ee62aff1"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.599749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerStarted","Data":"92f929c341ccb7c4858283441c3001e4601d8287f04bb983d7276beb95b55533"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.601442 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" event={"ID":"867152c5-9f9e-40b4-8623-3437a9793b5d","Type":"ContainerStarted","Data":"6708420cb41a9f2685bada7c05ead7e16a3a5fb88fa0562bee540e741f7c66e1"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.602999 4681 generic.go:334] "Generic (PLEG): container finished" podID="56028b8f-0d6b-4f7f-b4d6-cefc5acec683" containerID="40c24a689a1c1c2e579dcd5c0ae257b02ee1eb59fb9e67c39e8b85f9acac6f28" exitCode=0 Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.603072 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" event={"ID":"56028b8f-0d6b-4f7f-b4d6-cefc5acec683","Type":"ContainerDied","Data":"40c24a689a1c1c2e579dcd5c0ae257b02ee1eb59fb9e67c39e8b85f9acac6f28"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.604233 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"233058d37e04163e6af182a519e57cb6b66545ee113bae7ae3d47eabe66bea3d"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.605723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" event={"ID":"6c6431a6-2eaa-4931-9891-f6c08c3ed5ce","Type":"ContainerStarted","Data":"206e01be37981ce2456fb46b3d766bf97477dece3e5c6055390a901c42508ff6"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.607096 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2b842d3f2f23e851a66cfa951dc1c7bf741a292462e527805a086de431eb4a65"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.608786 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" event={"ID":"b1c92fd3-b8d6-4db7-bf23-784d5eb8ac5e","Type":"ContainerStarted","Data":"afb8d64c19c2efc5f4f8476753d681b731a36a7d6cfc3bdf77b2f9bb4692d2b8"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.610329 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"b2170ee39a84034e4078c3c0366291e83cab0b5fd0c8e5c7a04b1aecbe1ada1b"} Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.628532 
4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.633725 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-9rfl7" podStartSLOduration=172.63370865 podStartE2EDuration="2m52.63370865s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:07.632774964 +0000 UTC m=+227.298550084" watchObservedRunningTime="2026-04-04 01:59:07.63370865 +0000 UTC m=+227.299483770" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.648303 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" podStartSLOduration=171.648284061 podStartE2EDuration="2m51.648284061s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:07.645793221 +0000 UTC m=+227.311568341" watchObservedRunningTime="2026-04-04 01:59:07.648284061 +0000 UTC m=+227.314059181" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.667496 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podStartSLOduration=172.667477621 podStartE2EDuration="2m52.667477621s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:07.662639085 +0000 UTC m=+227.328414205" watchObservedRunningTime="2026-04-04 01:59:07.667477621 +0000 UTC m=+227.333252741" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 
01:59:07.678595 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.679545 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.17951209 +0000 UTC m=+227.845287210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.681291 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-d8sjd" podStartSLOduration=171.681274209 podStartE2EDuration="2m51.681274209s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:07.678932164 +0000 UTC m=+227.344707314" watchObservedRunningTime="2026-04-04 01:59:07.681274209 +0000 UTC m=+227.347049319" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.682438 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.685798 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.185782846 +0000 UTC m=+227.851557966 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.793016 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.793215 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.293185639 +0000 UTC m=+227.958960759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.794094 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.794655 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.29463685 +0000 UTC m=+227.960411970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.839787 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.853630 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.853714 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.854092 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.854121 4681 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.898875 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.899051 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.399031809 +0000 UTC m=+228.064806929 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.899394 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:07 crc kubenswrapper[4681]: E0404 01:59:07.899704 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.399694097 +0000 UTC m=+228.065469217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:07 crc kubenswrapper[4681]: I0404 01:59:07.899969 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.000874 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.001021 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.500990848 +0000 UTC m=+228.166765988 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.001204 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.001533 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.501522414 +0000 UTC m=+228.167297604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.102499 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.102672 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.60264776 +0000 UTC m=+228.268422880 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.102744 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.103086 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.603078192 +0000 UTC m=+228.268853312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.203575 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.203739 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.703697864 +0000 UTC m=+228.369472984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.204072 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.204417 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.704405414 +0000 UTC m=+228.370180534 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.305329 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.305743 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.805728576 +0000 UTC m=+228.471503696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.406659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.407034 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:08.907023667 +0000 UTC m=+228.572798787 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.507237 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.507479 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.007445744 +0000 UTC m=+228.673220874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.507615 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.507971 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.007961748 +0000 UTC m=+228.673736878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.515994 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:08 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:08 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:08 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.516074 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.608899 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.609078 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:09.109052824 +0000 UTC m=+228.774827944 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.609153 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.609482 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.109471646 +0000 UTC m=+228.775246766 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.615574 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" event={"ID":"a2c6ee2e-54a9-4992-ac77-2b1f65957602","Type":"ContainerStarted","Data":"5a1fe67c0d1756286ac0063716c6de04f45f816a0171c625903a341e261c9352"} Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.617015 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0feb9bef-e8b8-477e-a1fe-33bf077267e5","Type":"ContainerStarted","Data":"7d2881839c6ef1f9b01b11f9a82c0eb01998cf4d6991faee673c4d20f15147db"} Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.617776 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-d5qvq" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.618565 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.618616 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.658606 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-2fpgr" podStartSLOduration=172.658587268 podStartE2EDuration="2m52.658587268s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:08.657711403 +0000 UTC m=+228.323486523" watchObservedRunningTime="2026-04-04 01:59:08.658587268 +0000 UTC m=+228.324362388" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.679778 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cxjm4" podStartSLOduration=172.679759624 podStartE2EDuration="2m52.679759624s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:08.679405634 +0000 UTC m=+228.345180764" watchObservedRunningTime="2026-04-04 01:59:08.679759624 +0000 UTC m=+228.345534744" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.709872 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.710039 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:09.210008776 +0000 UTC m=+228.875783906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.710762 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.712819 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.212801894 +0000 UTC m=+228.878577104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.714219 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-j4q2w" podStartSLOduration=172.714182263 podStartE2EDuration="2m52.714182263s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:08.708554774 +0000 UTC m=+228.374329894" watchObservedRunningTime="2026-04-04 01:59:08.714182263 +0000 UTC m=+228.379957393" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.798772 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-d5qvq" podStartSLOduration=15.798749783 podStartE2EDuration="15.798749783s" podCreationTimestamp="2026-04-04 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:08.783077582 +0000 UTC m=+228.448852702" watchObservedRunningTime="2026-04-04 01:59:08.798749783 +0000 UTC m=+228.464524903" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.809172 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.809897 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.815533 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.315505425 +0000 UTC m=+228.981280555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.817112 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.817400 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.818222 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.818603 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.818894 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.318873779 +0000 UTC m=+228.984648899 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.856354 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.926936 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.927224 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 
01:59:08.927342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:08 crc kubenswrapper[4681]: E0404 01:59:08.927449 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.427430665 +0000 UTC m=+229.093205785 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:08 crc kubenswrapper[4681]: I0404 01:59:08.952278 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podStartSLOduration=173.952243024 podStartE2EDuration="2m53.952243024s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:08.95138513 +0000 UTC m=+228.617160260" watchObservedRunningTime="2026-04-04 01:59:08.952243024 +0000 UTC m=+228.618018154" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.012542 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" podStartSLOduration=174.012526461 podStartE2EDuration="2m54.012526461s" podCreationTimestamp="2026-04-04 01:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:09.010411611 +0000 UTC m=+228.676186751" watchObservedRunningTime="2026-04-04 01:59:09.012526461 +0000 UTC m=+228.678301581" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.028185 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.028252 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.028301 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.028653 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: 
\"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.028924 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.528910681 +0000 UTC m=+229.194685811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.058992 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.089320 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.129544 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.129702 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.629680448 +0000 UTC m=+229.295455568 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.129739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.130061 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.630052019 +0000 UTC m=+229.295827139 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.136549 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.230712 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.230759 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume\") pod \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.230807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume\") pod \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " Apr 04 
01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.230892 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzpf9\" (UniqueName: \"kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9\") pod \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\" (UID: \"56028b8f-0d6b-4f7f-b4d6-cefc5acec683\") " Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.231542 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume" (OuterVolumeSpecName: "config-volume") pod "56028b8f-0d6b-4f7f-b4d6-cefc5acec683" (UID: "56028b8f-0d6b-4f7f-b4d6-cefc5acec683"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.232175 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.732158432 +0000 UTC m=+229.397933552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.295072 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9" (OuterVolumeSpecName: "kube-api-access-tzpf9") pod "56028b8f-0d6b-4f7f-b4d6-cefc5acec683" (UID: "56028b8f-0d6b-4f7f-b4d6-cefc5acec683"). InnerVolumeSpecName "kube-api-access-tzpf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.296609 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "56028b8f-0d6b-4f7f-b4d6-cefc5acec683" (UID: "56028b8f-0d6b-4f7f-b4d6-cefc5acec683"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.333421 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.333520 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzpf9\" (UniqueName: \"kubernetes.io/projected/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-kube-api-access-tzpf9\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.333535 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.333546 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/56028b8f-0d6b-4f7f-b4d6-cefc5acec683-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.333817 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.833802423 +0000 UTC m=+229.499577543 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.334244 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.434537 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.436462 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.935713752 +0000 UTC m=+229.601488912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.436555 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.437152 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:09.937141762 +0000 UTC m=+229.602916882 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.516844 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:09 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:09 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:09 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.516911 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.537539 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.537743 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:10.037715603 +0000 UTC m=+229.703490723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.537839 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.538185 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.038177986 +0000 UTC m=+229.703953106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.629111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"344435c6-96e7-44ac-99d4-55e56ac1f631","Type":"ContainerStarted","Data":"01a5f6509b4ef61367b281b53572401443ebd3b64c86a681175a3a4ec2c32154"} Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.630906 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0feb9bef-e8b8-477e-a1fe-33bf077267e5","Type":"ContainerStarted","Data":"302f3060d1e7de6b38ef7f5e7a70a332f7158c2af4992f7b88a4962c214654cd"} Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.632990 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" event={"ID":"56028b8f-0d6b-4f7f-b4d6-cefc5acec683","Type":"ContainerDied","Data":"5d5a3ca11a18835dd51381e9bd77ccf6a2a118940dae5dbf92c5990b703d2c55"} Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.633018 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d5a3ca11a18835dd51381e9bd77ccf6a2a118940dae5dbf92c5990b703d2c55" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.633042 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.639246 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.639383 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.139363294 +0000 UTC m=+229.805138414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.639509 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.639787 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.139780845 +0000 UTC m=+229.805555965 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.653244 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.653226914 podStartE2EDuration="2.653226914s" podCreationTimestamp="2026-04-04 01:59:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:09.649672164 +0000 UTC m=+229.315447284" watchObservedRunningTime="2026-04-04 01:59:09.653226914 +0000 UTC m=+229.319002034" Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.740168 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.740531 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:10.24049825 +0000 UTC m=+229.906273370 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.740794 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.742440 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.242428485 +0000 UTC m=+229.908203605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.841577 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.841998 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.341979998 +0000 UTC m=+230.007755118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.943475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:09 crc kubenswrapper[4681]: E0404 01:59:09.943811 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.443796943 +0000 UTC m=+230.109572063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.989747 4681 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-7knn5 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Apr 04 01:59:09 crc kubenswrapper[4681]: I0404 01:59:09.989794 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" podUID="ea58e7c7-9e3b-42ea-898f-c161a7ce17d7" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.044964 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.045447 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.545428984 +0000 UTC m=+230.211204104 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.146948 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.147285 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.647271991 +0000 UTC m=+230.313047111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.157103 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.157342 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" podUID="77f92d06-8808-403f-a105-192cdc57730d" containerName="service-ca-controller" containerID="cri-o://b72b490a9a87657043759068c316ab4e55728841b1641eeb4f35e674b5965a2c" gracePeriod=30 Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.211679 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.211724 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.215411 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.215463 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" 
output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.248014 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.248201 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.748175131 +0000 UTC m=+230.413950251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.248293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.248637 4681 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.748629234 +0000 UTC m=+230.414404354 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.351503 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.351667 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.851637453 +0000 UTC m=+230.517412573 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.351796 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.352243 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.85222486 +0000 UTC m=+230.517999970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.452905 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.453101 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.953059768 +0000 UTC m=+230.618834878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.453331 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.453737 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:10.953715776 +0000 UTC m=+230.619490956 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.518012 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:10 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:10 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:10 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.518082 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.555678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.555848 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:11.05582286 +0000 UTC m=+230.721597970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.555925 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.556233 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.056223982 +0000 UTC m=+230.721999102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.641728 4681 generic.go:334] "Generic (PLEG): container finished" podID="77f92d06-8808-403f-a105-192cdc57730d" containerID="b72b490a9a87657043759068c316ab4e55728841b1641eeb4f35e674b5965a2c" exitCode=0 Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.641824 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" event={"ID":"77f92d06-8808-403f-a105-192cdc57730d","Type":"ContainerDied","Data":"b72b490a9a87657043759068c316ab4e55728841b1641eeb4f35e674b5965a2c"} Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.651192 4681 generic.go:334] "Generic (PLEG): container finished" podID="0feb9bef-e8b8-477e-a1fe-33bf077267e5" containerID="302f3060d1e7de6b38ef7f5e7a70a332f7158c2af4992f7b88a4962c214654cd" exitCode=0 Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.651308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0feb9bef-e8b8-477e-a1fe-33bf077267e5","Type":"ContainerDied","Data":"302f3060d1e7de6b38ef7f5e7a70a332f7158c2af4992f7b88a4962c214654cd"} Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.656625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.656909 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.156896405 +0000 UTC m=+230.822671515 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.656996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"344435c6-96e7-44ac-99d4-55e56ac1f631","Type":"ContainerStarted","Data":"8397287441bcef5398f0eca2a8113041939cea070203820921d3f1f4a6be4c7f"} Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.684686 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.6846694060000003 podStartE2EDuration="2.684669406s" podCreationTimestamp="2026-04-04 01:59:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:10.681800275 +0000 UTC m=+230.347575395" watchObservedRunningTime="2026-04-04 01:59:10.684669406 +0000 UTC m=+230.350444516" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.758235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.758580 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.258565217 +0000 UTC m=+230.924340337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.859441 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.859638 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.359611151 +0000 UTC m=+231.025386271 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.859668 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.859964 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.35995339 +0000 UTC m=+231.025728500 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.960854 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.961061 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.461032395 +0000 UTC m=+231.126807525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.961372 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.961657 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.461642692 +0000 UTC m=+231.127417812 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.966021 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5qn4m"] Apr 04 01:59:10 crc kubenswrapper[4681]: E0404 01:59:10.966236 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56028b8f-0d6b-4f7f-b4d6-cefc5acec683" containerName="collect-profiles" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.966247 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="56028b8f-0d6b-4f7f-b4d6-cefc5acec683" containerName="collect-profiles" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.966358 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="56028b8f-0d6b-4f7f-b4d6-cefc5acec683" containerName="collect-profiles" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.967056 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.971109 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44056: no serving certificate available for the kubelet" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.971997 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Apr 04 01:59:10 crc kubenswrapper[4681]: I0404 01:59:10.979325 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qn4m"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.062278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.062480 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.062547 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2bfd\" (UniqueName: \"kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.062568 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.062682 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.562666756 +0000 UTC m=+231.228441866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.068693 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44066: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.158948 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44076: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.163480 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc 
kubenswrapper[4681]: I0404 01:59:11.163541 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2bfd\" (UniqueName: \"kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.163570 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.163612 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.163950 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.663927817 +0000 UTC m=+231.329702997 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.163992 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.164559 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.187492 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8stk"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.188494 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.189469 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2bfd\" (UniqueName: \"kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd\") pod \"certified-operators-5qn4m\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.190826 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.211495 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8stk"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.269051 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.269603 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.76958418 +0000 UTC m=+231.435359300 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.272763 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44090: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.285665 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.365421 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.366364 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.370473 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb4kb\" (UniqueName: \"kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.370535 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.370569 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.370593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.371522 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.87151027 +0000 UTC m=+231.537285390 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.422160 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.433467 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44104: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473523 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.473682 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.973652774 +0000 UTC m=+231.639427894 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473712 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k9ng\" (UniqueName: \"kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473747 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb4kb\" (UniqueName: \"kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473768 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473793 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities\") pod \"community-operators-m8stk\" (UID: 
\"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473907 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.473954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.474018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.474310 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:11.974297482 +0000 UTC m=+231.640072602 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.474466 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.474542 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.496012 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.496203 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" containerID="cri-o://fb8fa00d3aabe8c157730ec75f6a8af9fc1c7ff263585679795f38187a9fffcb" gracePeriod=30 Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.501211 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.502286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb4kb\" (UniqueName: \"kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb\") pod \"community-operators-m8stk\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.513642 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:11 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:11 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:11 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.513698 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.519162 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.519364 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" containerID="cri-o://5476de75c33167b46440343021a24cce3c2eb98edfc710e1899fa446d9aa4d53" gracePeriod=30 Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.527505 4681 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.527820 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8stk" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.570550 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.572402 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.574174 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-d5qvq" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575046 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575289 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k9ng\" (UniqueName: \"kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng\") pod \"certified-operators-kmgrn\" (UID: 
\"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575376 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575778 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.575821 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.575861 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.075846681 +0000 UTC m=+231.741621801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.582982 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.588471 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44108: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.612282 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k9ng\" (UniqueName: \"kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng\") pod \"certified-operators-kmgrn\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.671085 4681 generic.go:334] "Generic (PLEG): container finished" podID="344435c6-96e7-44ac-99d4-55e56ac1f631" containerID="8397287441bcef5398f0eca2a8113041939cea070203820921d3f1f4a6be4c7f" exitCode=0 Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.671196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"344435c6-96e7-44ac-99d4-55e56ac1f631","Type":"ContainerDied","Data":"8397287441bcef5398f0eca2a8113041939cea070203820921d3f1f4a6be4c7f"} Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.678895 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.678986 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.679023 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.679050 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.679897 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.17988512 +0000 UTC m=+231.845660240 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.682820 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.780595 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.780770 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.280741409 +0000 UTC m=+231.946516529 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.781067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.781126 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.781166 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.781200 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr\") pod \"community-operators-sv8f4\" (UID: 
\"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.781399 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.281389747 +0000 UTC m=+231.947164927 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.781666 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.790276 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.793916 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44112: no serving certificate available for the kubelet" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.822692 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr\") pod \"community-operators-sv8f4\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.840402 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.840455 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.844487 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.844553 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:11 crc 
kubenswrapper[4681]: I0404 01:59:11.882072 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.882550 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.382530963 +0000 UTC m=+232.048306093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.886632 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 01:59:11 crc kubenswrapper[4681]: I0404 01:59:11.984297 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:11 crc kubenswrapper[4681]: E0404 01:59:11.984678 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.484661018 +0000 UTC m=+232.150436138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.085320 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.085499 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.585471815 +0000 UTC m=+232.251246945 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.085782 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.086094 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.586083623 +0000 UTC m=+232.251858743 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.166297 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44116: no serving certificate available for the kubelet" Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.186499 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.186627 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.686608263 +0000 UTC m=+232.352383383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.187190 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.187682 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.687669372 +0000 UTC m=+232.353444492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.288846 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.289315 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.789293772 +0000 UTC m=+232.455068892 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.378032 4681 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.391104 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.391587 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.891562652 +0000 UTC m=+232.557337782 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.496986 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.497700 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:12.997682569 +0000 UTC m=+232.663457689 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.513315 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:12 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:12 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:12 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.513371 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.599408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.599686 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-04-04 01:59:13.099675139 +0000 UTC m=+232.765450259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.678669 4681 generic.go:334] "Generic (PLEG): container finished" podID="b54a0848-6df8-47da-8537-a01d44322ca4" containerID="5476de75c33167b46440343021a24cce3c2eb98edfc710e1899fa446d9aa4d53" exitCode=0 Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.678760 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" event={"ID":"b54a0848-6df8-47da-8537-a01d44322ca4","Type":"ContainerDied","Data":"5476de75c33167b46440343021a24cce3c2eb98edfc710e1899fa446d9aa4d53"} Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.682805 4681 generic.go:334] "Generic (PLEG): container finished" podID="645ae111-522a-4216-aadd-0901313020ce" containerID="fb8fa00d3aabe8c157730ec75f6a8af9fc1c7ff263585679795f38187a9fffcb" exitCode=0 Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.682891 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" event={"ID":"645ae111-522a-4216-aadd-0901313020ce","Type":"ContainerDied","Data":"fb8fa00d3aabe8c157730ec75f6a8af9fc1c7ff263585679795f38187a9fffcb"} Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.699976 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.700434 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.200416225 +0000 UTC m=+232.866191345 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.806666 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.807029 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.307015725 +0000 UTC m=+232.972790855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.852700 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44130: no serving certificate available for the kubelet" Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.907200 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.907398 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.40735887 +0000 UTC m=+233.073133990 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:12 crc kubenswrapper[4681]: I0404 01:59:12.907542 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:12 crc kubenswrapper[4681]: E0404 01:59:12.907863 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.407854363 +0000 UTC m=+233.073629483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.008734 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.008882 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.508856226 +0000 UTC m=+233.174631346 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.009020 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.009341 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.509327829 +0000 UTC m=+233.175102949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.110336 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.110622 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.610590409 +0000 UTC m=+233.276365539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.110701 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.111051 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.611038503 +0000 UTC m=+233.276813623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.152403 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w2zrg"] Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.153470 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.155433 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.167087 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2zrg"] Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.212428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.212635 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.712609412 +0000 UTC m=+233.378384532 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.212792 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.213101 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.713089585 +0000 UTC m=+233.378864705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.314029 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.314221 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.814197981 +0000 UTC m=+233.479973101 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.314276 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpblw\" (UniqueName: \"kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.314324 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.314451 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.314526 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.314894 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.81487601 +0000 UTC m=+233.480651140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.416032 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.416347 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpblw\" (UniqueName: \"kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.416381 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.416419 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.416977 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:13.916952543 +0000 UTC m=+233.582727663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.426089 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.428274 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.454752 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpblw\" (UniqueName: \"kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw\") pod \"redhat-marketplace-w2zrg\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.474104 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.502239 4681 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.512889 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:13 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:13 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:13 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.512967 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.517833 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.518408 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.018395038 +0000 UTC m=+233.684170158 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.552926 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.555514 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.572062 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.618521 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.618929 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.118914768 +0000 UTC m=+233.784689888 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.689487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" event={"ID":"a2c6ee2e-54a9-4992-ac77-2b1f65957602","Type":"ContainerStarted","Data":"3e121c237d2eca738e94b3475b0d371b23930de3b42533ee89af5a547c753de4"} Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.720726 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.721622 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.221609959 +0000 UTC m=+233.887385079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.721776 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.721808 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.721842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rfcz\" (UniqueName: \"kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.822755 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.822928 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rfcz\" (UniqueName: \"kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.823010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.823035 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.823485 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.824016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " 
pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.824396 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.324369291 +0000 UTC m=+233.990144461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.857299 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rfcz\" (UniqueName: \"kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz\") pod \"redhat-marketplace-qdvrs\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.857901 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.889332 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 01:59:13 crc kubenswrapper[4681]: I0404 01:59:13.924173 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:13 crc kubenswrapper[4681]: E0404 01:59:13.924653 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.424628793 +0000 UTC m=+234.090404003 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.025776 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.025966 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.525935084 +0000 UTC m=+234.191710204 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.026198 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.026586 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.526566452 +0000 UTC m=+234.192341662 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.127249 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.127413 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.62739253 +0000 UTC m=+234.293167650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.127496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.127936 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.627925676 +0000 UTC m=+234.293700796 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.153282 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dvhf7"] Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.154638 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.156907 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.157688 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44132: no serving certificate available for the kubelet" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.171753 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvhf7"] Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.229853 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.230018 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.729989678 +0000 UTC m=+234.395764798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.230373 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.230671 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.730660537 +0000 UTC m=+234.396435747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.339152 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.339349 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.839321285 +0000 UTC m=+234.505096405 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.339523 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.339612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z87tn\" (UniqueName: \"kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.339757 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.339805 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content\") 
pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: E0404 01:59:14.340152 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-04-04 01:59:14.840140028 +0000 UTC m=+234.505915148 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-nzjzv" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.405225 4681 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-04-04T01:59:13.502297825Z","Handler":null,"Name":""} Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.409523 4681 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.409564 4681 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.419072 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.440503 
4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.440663 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z87tn\" (UniqueName: \"kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.440723 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.440774 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.441184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.441695 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.444310 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.456087 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z87tn\" (UniqueName: \"kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn\") pod \"redhat-operators-dvhf7\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.478349 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.513128 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:14 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:14 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:14 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.513211 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.542682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.542802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.553602 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.554754 4681 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.566980 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.571541 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/41bdd8e6-130d-4e3e-b466-313031c233d1-metrics-certs\") pod \"network-metrics-daemon-jk6f6\" (UID: \"41bdd8e6-130d-4e3e-b466-313031c233d1\") " pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.745124 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.745353 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgzk\" (UniqueName: \"kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.745431 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.826387 4681 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jk6f6" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.850298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.850398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdgzk\" (UniqueName: \"kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.850439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.850914 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.850999 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " 
pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.865805 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdgzk\" (UniqueName: \"kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk\") pod \"redhat-operators-xdpcd\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.996690 4681 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.996765 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:14 crc kubenswrapper[4681]: I0404 01:59:14.997524 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.035757 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.047382 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-7knn5" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.065210 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-nzjzv\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.127504 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.207571 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.220650 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.224811 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.479449 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44148: no serving certificate available for the kubelet" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.511674 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Apr 04 01:59:15 crc kubenswrapper[4681]: [-]has-synced failed: reason withheld Apr 04 01:59:15 crc kubenswrapper[4681]: [+]process-running ok Apr 04 01:59:15 crc kubenswrapper[4681]: healthz check failed Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.511747 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.653627 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 
10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.653671 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.653693 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.653736 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.706129 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.727858 4681 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-5wdgm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.727937 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.857314 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-s28vn" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.910020 4681 patch_prober.go:28] interesting pod/console-f9d7485db-c6ktd container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.910080 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c6ktd" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Apr 04 01:59:15 crc kubenswrapper[4681]: I0404 01:59:15.969381 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.066027 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.068405 4681 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-wpzmf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body= Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.068445 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" 
podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.23:8443/healthz\": dial tcp 10.217.0.23:8443: connect: connection refused" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.231879 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dv8vg" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.357358 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.360848 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.363718 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425589 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access\") pod \"344435c6-96e7-44ac-99d4-55e56ac1f631\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425641 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bc827\" (UniqueName: \"kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827\") pod \"77f92d06-8808-403f-a105-192cdc57730d\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425665 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access\") pod \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425700 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key\") pod \"77f92d06-8808-403f-a105-192cdc57730d\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425739 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir\") pod \"344435c6-96e7-44ac-99d4-55e56ac1f631\" (UID: \"344435c6-96e7-44ac-99d4-55e56ac1f631\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425784 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle\") pod \"77f92d06-8808-403f-a105-192cdc57730d\" (UID: \"77f92d06-8808-403f-a105-192cdc57730d\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.425820 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir\") pod \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\" (UID: \"0feb9bef-e8b8-477e-a1fe-33bf077267e5\") " Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.426021 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0feb9bef-e8b8-477e-a1fe-33bf077267e5" (UID: "0feb9bef-e8b8-477e-a1fe-33bf077267e5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.430411 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "344435c6-96e7-44ac-99d4-55e56ac1f631" (UID: "344435c6-96e7-44ac-99d4-55e56ac1f631"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.430457 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key" (OuterVolumeSpecName: "signing-key") pod "77f92d06-8808-403f-a105-192cdc57730d" (UID: "77f92d06-8808-403f-a105-192cdc57730d"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.430470 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "344435c6-96e7-44ac-99d4-55e56ac1f631" (UID: "344435c6-96e7-44ac-99d4-55e56ac1f631"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.442697 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "77f92d06-8808-403f-a105-192cdc57730d" (UID: "77f92d06-8808-403f-a105-192cdc57730d"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.443304 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827" (OuterVolumeSpecName: "kube-api-access-bc827") pod "77f92d06-8808-403f-a105-192cdc57730d" (UID: "77f92d06-8808-403f-a105-192cdc57730d"). InnerVolumeSpecName "kube-api-access-bc827". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.448831 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0feb9bef-e8b8-477e-a1fe-33bf077267e5" (UID: "0feb9bef-e8b8-477e-a1fe-33bf077267e5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.504296 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.516086 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.523507 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-d25mp" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526692 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526736 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/344435c6-96e7-44ac-99d4-55e56ac1f631-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526749 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bc827\" (UniqueName: \"kubernetes.io/projected/77f92d06-8808-403f-a105-192cdc57730d-kube-api-access-bc827\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526762 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feb9bef-e8b8-477e-a1fe-33bf077267e5-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526774 4681 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/77f92d06-8808-403f-a105-192cdc57730d-signing-key\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526784 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/344435c6-96e7-44ac-99d4-55e56ac1f631-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.526794 4681 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/77f92d06-8808-403f-a105-192cdc57730d-signing-cabundle\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.717514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"344435c6-96e7-44ac-99d4-55e56ac1f631","Type":"ContainerDied","Data":"01a5f6509b4ef61367b281b53572401443ebd3b64c86a681175a3a4ec2c32154"} Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.717554 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a5f6509b4ef61367b281b53572401443ebd3b64c86a681175a3a4ec2c32154" 
Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.717610 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.726074 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.726110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-hwjqf" event={"ID":"77f92d06-8808-403f-a105-192cdc57730d","Type":"ContainerDied","Data":"efbdf398c3a70edaed96a04fd1a499d3b35866bc6f60f0b57e4a9b1bbc592fbd"} Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.726166 4681 scope.go:117] "RemoveContainer" containerID="b72b490a9a87657043759068c316ab4e55728841b1641eeb4f35e674b5965a2c" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.730452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"0feb9bef-e8b8-477e-a1fe-33bf077267e5","Type":"ContainerDied","Data":"7d2881839c6ef1f9b01b11f9a82c0eb01998cf4d6991faee673c4d20f15147db"} Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.730488 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.730497 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d2881839c6ef1f9b01b11f9a82c0eb01998cf4d6991faee673c4d20f15147db" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.756807 4681 ???:1] "http: TLS handshake error from 192.168.126.11:44156: no serving certificate available for the kubelet" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.783459 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.787352 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-j4m77" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.788742 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-hwjqf"] Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808008 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-599b54464f-gtwzc"] Apr 04 01:59:16 crc kubenswrapper[4681]: E0404 01:59:16.808304 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="344435c6-96e7-44ac-99d4-55e56ac1f631" containerName="pruner" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808320 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="344435c6-96e7-44ac-99d4-55e56ac1f631" containerName="pruner" Apr 04 01:59:16 crc kubenswrapper[4681]: E0404 01:59:16.808339 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0feb9bef-e8b8-477e-a1fe-33bf077267e5" containerName="pruner" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808360 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0feb9bef-e8b8-477e-a1fe-33bf077267e5" containerName="pruner" Apr 04 01:59:16 crc 
kubenswrapper[4681]: E0404 01:59:16.808378 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77f92d06-8808-403f-a105-192cdc57730d" containerName="service-ca-controller" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808386 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="77f92d06-8808-403f-a105-192cdc57730d" containerName="service-ca-controller" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808522 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="344435c6-96e7-44ac-99d4-55e56ac1f631" containerName="pruner" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808538 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="77f92d06-8808-403f-a105-192cdc57730d" containerName="service-ca-controller" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.808546 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0feb9bef-e8b8-477e-a1fe-33bf077267e5" containerName="pruner" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.809043 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.813984 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.816231 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.816425 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.816470 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.816677 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.823603 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-599b54464f-gtwzc"] Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.831787 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-key\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.831939 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-cabundle\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc 
kubenswrapper[4681]: I0404 01:59:16.831957 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5rjp\" (UniqueName: \"kubernetes.io/projected/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-kube-api-access-b5rjp\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.933183 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-cabundle\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.933487 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5rjp\" (UniqueName: \"kubernetes.io/projected/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-kube-api-access-b5rjp\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.933507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-key\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.933944 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-cabundle\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " 
pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.959577 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5rjp\" (UniqueName: \"kubernetes.io/projected/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-kube-api-access-b5rjp\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:16 crc kubenswrapper[4681]: I0404 01:59:16.965670 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/f28cf32b-22dc-4fc9-99f0-6c7e462ff132-signing-key\") pod \"service-ca-599b54464f-gtwzc\" (UID: \"f28cf32b-22dc-4fc9-99f0-6c7e462ff132\") " pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:17 crc kubenswrapper[4681]: I0404 01:59:17.126983 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-599b54464f-gtwzc" Apr 04 01:59:17 crc kubenswrapper[4681]: I0404 01:59:17.208015 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77f92d06-8808-403f-a105-192cdc57730d" path="/var/lib/kubelet/pods/77f92d06-8808-403f-a105-192cdc57730d/volumes" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.099530 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.148878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert\") pod \"b54a0848-6df8-47da-8537-a01d44322ca4\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.148989 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config\") pod \"b54a0848-6df8-47da-8537-a01d44322ca4\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.149064 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94txw\" (UniqueName: \"kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw\") pod \"b54a0848-6df8-47da-8537-a01d44322ca4\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.149086 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca\") pod \"b54a0848-6df8-47da-8537-a01d44322ca4\" (UID: \"b54a0848-6df8-47da-8537-a01d44322ca4\") " Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.150909 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config" (OuterVolumeSpecName: "config") pod "b54a0848-6df8-47da-8537-a01d44322ca4" (UID: "b54a0848-6df8-47da-8537-a01d44322ca4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.151443 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b54a0848-6df8-47da-8537-a01d44322ca4" (UID: "b54a0848-6df8-47da-8537-a01d44322ca4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.160385 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b54a0848-6df8-47da-8537-a01d44322ca4" (UID: "b54a0848-6df8-47da-8537-a01d44322ca4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.177906 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw" (OuterVolumeSpecName: "kube-api-access-94txw") pod "b54a0848-6df8-47da-8537-a01d44322ca4" (UID: "b54a0848-6df8-47da-8537-a01d44322ca4"). InnerVolumeSpecName "kube-api-access-94txw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.250292 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.250343 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94txw\" (UniqueName: \"kubernetes.io/projected/b54a0848-6df8-47da-8537-a01d44322ca4-kube-api-access-94txw\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.250360 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b54a0848-6df8-47da-8537-a01d44322ca4-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.250372 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b54a0848-6df8-47da-8537-a01d44322ca4-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.572088 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:18 crc kubenswrapper[4681]: E0404 01:59:18.572650 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.572664 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.572761 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" containerName="route-controller-manager" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.573223 
4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.587194 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.744896 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" event={"ID":"b54a0848-6df8-47da-8537-a01d44322ca4","Type":"ContainerDied","Data":"eedce7f58219c8c323b008af1bc4351649a5e5ef3652937cde3bceb001fb6e83"} Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.744990 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.756079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.756221 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.756294 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.756401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmh7f\" (UniqueName: \"kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.776307 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.782367 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-5wdgm"] Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.857181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.857235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " 
pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.857306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmh7f\" (UniqueName: \"kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.857356 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.859411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.859679 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.861587 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.879506 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmh7f\" (UniqueName: \"kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f\") pod \"route-controller-manager-7fd9496cc9-jndph\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:18 crc kubenswrapper[4681]: I0404 01:59:18.895775 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:19 crc kubenswrapper[4681]: I0404 01:59:19.211878 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b54a0848-6df8-47da-8537-a01d44322ca4" path="/var/lib/kubelet/pods/b54a0848-6df8-47da-8537-a01d44322ca4/volumes" Apr 04 01:59:19 crc kubenswrapper[4681]: I0404 01:59:19.902365 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:59:19 crc kubenswrapper[4681]: I0404 01:59:19.973465 4681 scope.go:117] "RemoveContainer" containerID="5476de75c33167b46440343021a24cce3c2eb98edfc710e1899fa446d9aa4d53" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.080793 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config\") pod \"645ae111-522a-4216-aadd-0901313020ce\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.081104 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca\") pod \"645ae111-522a-4216-aadd-0901313020ce\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.081140 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2lcv\" (UniqueName: \"kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv\") pod \"645ae111-522a-4216-aadd-0901313020ce\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.081178 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert\") pod \"645ae111-522a-4216-aadd-0901313020ce\" (UID: \"645ae111-522a-4216-aadd-0901313020ce\") " Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.081208 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles\") pod \"645ae111-522a-4216-aadd-0901313020ce\" (UID: 
\"645ae111-522a-4216-aadd-0901313020ce\") " Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.086570 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv" (OuterVolumeSpecName: "kube-api-access-z2lcv") pod "645ae111-522a-4216-aadd-0901313020ce" (UID: "645ae111-522a-4216-aadd-0901313020ce"). InnerVolumeSpecName "kube-api-access-z2lcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.088330 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca" (OuterVolumeSpecName: "client-ca") pod "645ae111-522a-4216-aadd-0901313020ce" (UID: "645ae111-522a-4216-aadd-0901313020ce"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.088414 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "645ae111-522a-4216-aadd-0901313020ce" (UID: "645ae111-522a-4216-aadd-0901313020ce"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.089312 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config" (OuterVolumeSpecName: "config") pod "645ae111-522a-4216-aadd-0901313020ce" (UID: "645ae111-522a-4216-aadd-0901313020ce"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.091416 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "645ae111-522a-4216-aadd-0901313020ce" (UID: "645ae111-522a-4216-aadd-0901313020ce"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.184336 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.184386 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.184400 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2lcv\" (UniqueName: \"kubernetes.io/projected/645ae111-522a-4216-aadd-0901313020ce-kube-api-access-z2lcv\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.184411 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/645ae111-522a-4216-aadd-0901313020ce-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.184442 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/645ae111-522a-4216-aadd-0901313020ce-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.507689 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5qn4m"] Apr 04 01:59:20 
crc kubenswrapper[4681]: W0404 01:59:20.528787 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf00114dc_2aae_4d37_8143_71336f144be3.slice/crio-2fc423564d475057b567302bc17716463da1952c821cc09db70634fe92283c2c WatchSource:0}: Error finding container 2fc423564d475057b567302bc17716463da1952c821cc09db70634fe92283c2c: Status 404 returned error can't find the container with id 2fc423564d475057b567302bc17716463da1952c821cc09db70634fe92283c2c Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.636385 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dvhf7"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.650211 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.676733 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8stk"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.686133 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.724009 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2zrg"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.733918 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.771453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerStarted","Data":"2bf6ce6d9c5ea83ae8f94a0c26f5b2ceedac134fd52e2f44e0cbc7f2a49343ec"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.773325 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerStarted","Data":"bdb8cfad974e2015a6e9a04476c19f3d6c245c02e2c0bb0d9e59b18002cf9cd3"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.790304 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerStarted","Data":"2fc423564d475057b567302bc17716463da1952c821cc09db70634fe92283c2c"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.791962 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.796161 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" event={"ID":"645ae111-522a-4216-aadd-0901313020ce","Type":"ContainerDied","Data":"3942e06a25bb8844313cc5b0325583696d2e2ee8b3a96bbe76fcff6695326649"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.796169 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-wpzmf" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.796512 4681 scope.go:117] "RemoveContainer" containerID="fb8fa00d3aabe8c157730ec75f6a8af9fc1c7ff263585679795f38187a9fffcb" Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.804781 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jk6f6"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.804822 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerStarted","Data":"f61bf544592be43e66551e2ad2efce665711b9491d6a4baa033d226b9064b82c"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.815437 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" event={"ID":"b26036bc-4cff-472f-a379-8dc4541cf018","Type":"ContainerStarted","Data":"28c30d2faad9128ae836e246429d7cae5b3e5eed7ae23ca96311680854fe8597"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.818587 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerStarted","Data":"bf1b2ca664a043738a03784b7f337797e99b9d468a8f615dce9358d5a01f51c8"} Apr 04 01:59:20 crc kubenswrapper[4681]: W0404 01:59:20.820345 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41bdd8e6_130d_4e3e_b466_313031c233d1.slice/crio-0ff69f31ad6b7663eea82f645860976a8b58e3375c2645c3d0393159c8caf7aa WatchSource:0}: Error finding container 0ff69f31ad6b7663eea82f645860976a8b58e3375c2645c3d0393159c8caf7aa: Status 404 returned error can't find the container with id 0ff69f31ad6b7663eea82f645860976a8b58e3375c2645c3d0393159c8caf7aa Apr 04 01:59:20 crc 
kubenswrapper[4681]: I0404 01:59:20.822629 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerStarted","Data":"0687ff25679556832c1335870c08e5631727bf4ce589b61c94bd23bb4110338d"} Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.858298 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.863691 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.874008 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.880252 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-wpzmf"] Apr 04 01:59:20 crc kubenswrapper[4681]: W0404 01:59:20.881864 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbcf0420_aff0_484c_9c2b_134552760373.slice/crio-df9c8774db905c1de830ddce922656b58d5c91478f57c5b9e8db101af1436380 WatchSource:0}: Error finding container df9c8774db905c1de830ddce922656b58d5c91478f57c5b9e8db101af1436380: Status 404 returned error can't find the container with id df9c8774db905c1de830ddce922656b58d5c91478f57c5b9e8db101af1436380 Apr 04 01:59:20 crc kubenswrapper[4681]: W0404 01:59:20.889643 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb62eaf1b_9cab_4297_a047_7993d2506c1d.slice/crio-2b6b35ae870480e763d4382c40ec3dcd419db759561b86cfcf9f493064623bac WatchSource:0}: Error finding container 
2b6b35ae870480e763d4382c40ec3dcd419db759561b86cfcf9f493064623bac: Status 404 returned error can't find the container with id 2b6b35ae870480e763d4382c40ec3dcd419db759561b86cfcf9f493064623bac Apr 04 01:59:20 crc kubenswrapper[4681]: I0404 01:59:20.889664 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-599b54464f-gtwzc"] Apr 04 01:59:20 crc kubenswrapper[4681]: W0404 01:59:20.898258 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf28cf32b_22dc_4fc9_99f0_6c7e462ff132.slice/crio-52326df531c7159cbc715383989a7fc738c14339ae5975fc74e8ac3b80824eff WatchSource:0}: Error finding container 52326df531c7159cbc715383989a7fc738c14339ae5975fc74e8ac3b80824eff: Status 404 returned error can't find the container with id 52326df531c7159cbc715383989a7fc738c14339ae5975fc74e8ac3b80824eff Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.227098 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="645ae111-522a-4216-aadd-0901313020ce" path="/var/lib/kubelet/pods/645ae111-522a-4216-aadd-0901313020ce/volumes" Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.831496 4681 generic.go:334] "Generic (PLEG): container finished" podID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerID="391b0dadeab0a17ffb841b690258afdcb0692f62abd514deb8137fc42f4cb404" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.831667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerDied","Data":"391b0dadeab0a17ffb841b690258afdcb0692f62abd514deb8137fc42f4cb404"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.835363 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" 
event={"ID":"b26036bc-4cff-472f-a379-8dc4541cf018","Type":"ContainerStarted","Data":"a7e788b7107c4e3259472a3dc00b36075c3e1912a5f966f02120b87a1eac6fa5"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.835396 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.836879 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-599b54464f-gtwzc" event={"ID":"f28cf32b-22dc-4fc9-99f0-6c7e462ff132","Type":"ContainerStarted","Data":"af129348b406400f02456796e6ab5d8ccc3d133ab79ad65d4596552627cfd6a7"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.836902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-599b54464f-gtwzc" event={"ID":"f28cf32b-22dc-4fc9-99f0-6c7e462ff132","Type":"ContainerStarted","Data":"52326df531c7159cbc715383989a7fc738c14339ae5975fc74e8ac3b80824eff"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.838306 4681 generic.go:334] "Generic (PLEG): container finished" podID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerID="6aab3f4dc3df2a20b9b272243d33d40f6acebf13e06ffdf1fd3aeaeb30d006cb" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.838375 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerDied","Data":"6aab3f4dc3df2a20b9b272243d33d40f6acebf13e06ffdf1fd3aeaeb30d006cb"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.838399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerStarted","Data":"88a471454d9c64934c6d23c2120096fda91594aaa9ada110a86684d35e3786b8"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.840414 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerID="ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.840458 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerDied","Data":"ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.842400 4681 generic.go:334] "Generic (PLEG): container finished" podID="cbcf0420-aff0-484c-9c2b-134552760373" containerID="b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.842454 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerDied","Data":"b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.842469 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerStarted","Data":"df9c8774db905c1de830ddce922656b58d5c91478f57c5b9e8db101af1436380"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.844469 4681 generic.go:334] "Generic (PLEG): container finished" podID="f00114dc-2aae-4d37-8143-71336f144be3" containerID="043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.844515 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerDied","Data":"043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 
01:59:21.849253 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" event={"ID":"a2c6ee2e-54a9-4992-ac77-2b1f65957602","Type":"ContainerStarted","Data":"49d4e348e807c68a82df0e8d70d32a687367137f4cfc4cf6466cb0b99c285080"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.850597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" event={"ID":"41bdd8e6-130d-4e3e-b466-313031c233d1","Type":"ContainerStarted","Data":"857bf2657beee254818af5456c8f7e118255f4ae0424a6c4690c717c84bad520"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.850626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" event={"ID":"41bdd8e6-130d-4e3e-b466-313031c233d1","Type":"ContainerStarted","Data":"0ff69f31ad6b7663eea82f645860976a8b58e3375c2645c3d0393159c8caf7aa"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.854782 4681 generic.go:334] "Generic (PLEG): container finished" podID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerID="c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.854856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerDied","Data":"c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.856658 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" event={"ID":"b62eaf1b-9cab-4297-a047-7993d2506c1d","Type":"ContainerStarted","Data":"577b9b31094b1be4c07d6d0e6dece9540503c35c11615f70876a8ba1412ed3de"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.856702 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" event={"ID":"b62eaf1b-9cab-4297-a047-7993d2506c1d","Type":"ContainerStarted","Data":"2b6b35ae870480e763d4382c40ec3dcd419db759561b86cfcf9f493064623bac"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.857084 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.858246 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerID="258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.858341 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerDied","Data":"258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.869854 4681 generic.go:334] "Generic (PLEG): container finished" podID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerID="39c7c4d448639f0eebb9aaffcd5954959f90a2cd5f977733bc1d331082bf6b1e" exitCode=0 Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.869892 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerDied","Data":"39c7c4d448639f0eebb9aaffcd5954959f90a2cd5f977733bc1d331082bf6b1e"} Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.887793 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podStartSLOduration=185.887777185 podStartE2EDuration="3m5.887777185s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:21.883845054 +0000 UTC m=+241.549620174" watchObservedRunningTime="2026-04-04 01:59:21.887777185 +0000 UTC m=+241.553552315" Apr 04 01:59:21 crc kubenswrapper[4681]: I0404 01:59:21.912818 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-599b54464f-gtwzc" podStartSLOduration=5.912799939 podStartE2EDuration="5.912799939s" podCreationTimestamp="2026-04-04 01:59:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:21.908164329 +0000 UTC m=+241.573939479" watchObservedRunningTime="2026-04-04 01:59:21.912799939 +0000 UTC m=+241.578575059" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.043887 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" podStartSLOduration=10.043871708 podStartE2EDuration="10.043871708s" podCreationTimestamp="2026-04-04 01:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:22.03929178 +0000 UTC m=+241.705066910" watchObservedRunningTime="2026-04-04 01:59:22.043871708 +0000 UTC m=+241.709646828" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.283688 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.575319 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:22 crc kubenswrapper[4681]: E0404 01:59:22.575715 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="645ae111-522a-4216-aadd-0901313020ce" 
containerName="controller-manager" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.575726 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.575824 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="645ae111-522a-4216-aadd-0901313020ce" containerName="controller-manager" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.576223 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.582387 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.585325 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.585506 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.585683 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.586203 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.586365 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.586437 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 
01:59:22.592168 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.753391 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.753676 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.753700 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfqqz\" (UniqueName: \"kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.753742 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.753845 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.854912 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.854960 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.855020 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.855058 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " 
pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.855084 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfqqz\" (UniqueName: \"kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.856062 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.856371 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.856593 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.862602 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert\") pod 
\"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.872960 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfqqz\" (UniqueName: \"kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz\") pod \"controller-manager-6564c7f45c-s7m5v\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.885651 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jk6f6" event={"ID":"41bdd8e6-130d-4e3e-b466-313031c233d1","Type":"ContainerStarted","Data":"c9c58a43ef26a1e27c6a9f433193f516f96a7ca73fcd1da888a43df79c45c028"} Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.888553 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" event={"ID":"a2c6ee2e-54a9-4992-ac77-2b1f65957602","Type":"ContainerStarted","Data":"cc6ba57833eb31c56c0282fa98a1ef947577dbdeb3f5238545a5c7c2008fc2ab"} Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.897419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587798-tmssr" event={"ID":"fcd50c8c-38c5-4c42-930d-2235c4384328","Type":"ContainerStarted","Data":"77fc41ed9dc3215e18eabf5836b3c72a6b725147a83366fb1517e2c064752993"} Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.907511 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jk6f6" podStartSLOduration=186.907487367 podStartE2EDuration="3m6.907487367s" podCreationTimestamp="2026-04-04 01:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-04-04 01:59:22.904514404 +0000 UTC m=+242.570289554" watchObservedRunningTime="2026-04-04 01:59:22.907487367 +0000 UTC m=+242.573262487" Apr 04 01:59:22 crc kubenswrapper[4681]: I0404 01:59:22.917069 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.195016 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:23 crc kubenswrapper[4681]: W0404 01:59:23.235002 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod441d6509_71d2_4ce0_9408_06023a888142.slice/crio-74d740c8534513d38b3dbdbfe77561694ca6fe2136b47d3db3bd27ac4197d20e WatchSource:0}: Error finding container 74d740c8534513d38b3dbdbfe77561694ca6fe2136b47d3db3bd27ac4197d20e: Status 404 returned error can't find the container with id 74d740c8534513d38b3dbdbfe77561694ca6fe2136b47d3db3bd27ac4197d20e Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.903938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" event={"ID":"441d6509-71d2-4ce0-9408-06023a888142","Type":"ContainerStarted","Data":"74d740c8534513d38b3dbdbfe77561694ca6fe2136b47d3db3bd27ac4197d20e"} Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.917635 4681 csr.go:261] certificate signing request csr-d7bxs is approved, waiting to be issued Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.925945 4681 csr.go:257] certificate signing request csr-d7bxs is issued Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.933883 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-dnkcc" podStartSLOduration=30.933862966 podStartE2EDuration="30.933862966s" 
podCreationTimestamp="2026-04-04 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:23.930095101 +0000 UTC m=+243.595870221" watchObservedRunningTime="2026-04-04 01:59:23.933862966 +0000 UTC m=+243.599638086" Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.943286 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587798-tmssr" podStartSLOduration=58.395048847 podStartE2EDuration="1m23.94325695s" podCreationTimestamp="2026-04-04 01:58:00 +0000 UTC" firstStartedPulling="2026-04-04 01:58:56.908558007 +0000 UTC m=+216.574333147" lastFinishedPulling="2026-04-04 01:59:22.45676613 +0000 UTC m=+242.122541250" observedRunningTime="2026-04-04 01:59:23.941972315 +0000 UTC m=+243.607747435" watchObservedRunningTime="2026-04-04 01:59:23.94325695 +0000 UTC m=+243.609032070" Apr 04 01:59:23 crc kubenswrapper[4681]: I0404 01:59:23.990407 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 01:59:24 crc kubenswrapper[4681]: I0404 01:59:24.910769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" event={"ID":"441d6509-71d2-4ce0-9408-06023a888142","Type":"ContainerStarted","Data":"4dd02b8a213f58779486b466d211efdbc30b605b5c8996b74df64821a0508c12"} Apr 04 01:59:24 crc kubenswrapper[4681]: I0404 01:59:24.927341 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-27 17:22:11.639025809 +0000 UTC Apr 04 01:59:24 crc kubenswrapper[4681]: I0404 01:59:24.927381 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 5703h22m46.711647458s for next certificate rotation Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.654340 4681 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.654436 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.654486 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.654358 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.654549 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.655164 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.655208 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" 
podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.655219 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"427d17e02e448892f5ae843767760c5a27fcd731bbc3ed6efdec3be022b5a7b5"} pod="openshift-console/downloads-7954f5f757-fn5hz" containerMessage="Container download-server failed liveness probe, will be restarted" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.655279 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" containerID="cri-o://427d17e02e448892f5ae843767760c5a27fcd731bbc3ed6efdec3be022b5a7b5" gracePeriod=2 Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.953806 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-02 23:06:00.656426719 +0000 UTC Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.954081 4681 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6573h6m34.70234885s for next certificate rotation Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.961886 4681 generic.go:334] "Generic (PLEG): container finished" podID="fcd50c8c-38c5-4c42-930d-2235c4384328" containerID="77fc41ed9dc3215e18eabf5836b3c72a6b725147a83366fb1517e2c064752993" exitCode=0 Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.961938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587798-tmssr" event={"ID":"fcd50c8c-38c5-4c42-930d-2235c4384328","Type":"ContainerDied","Data":"77fc41ed9dc3215e18eabf5836b3c72a6b725147a83366fb1517e2c064752993"} Apr 04 01:59:25 crc 
kubenswrapper[4681]: I0404 01:59:25.962188 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.967551 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.981835 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:59:25 crc kubenswrapper[4681]: I0404 01:59:25.986046 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.007596 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" podStartSLOduration=14.007579795 podStartE2EDuration="14.007579795s" podCreationTimestamp="2026-04-04 01:59:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:26.006870196 +0000 UTC m=+245.672645316" watchObservedRunningTime="2026-04-04 01:59:26.007579795 +0000 UTC m=+245.673354915" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.299551 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.300329 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.319802 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460618 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460673 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460752 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460815 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460846 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460884 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj89\" (UniqueName: \"kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.460947 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.561649 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562028 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562071 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562106 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shj89\" (UniqueName: \"kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562206 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.562868 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.563049 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.563166 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.568597 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.569827 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.749612 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.752734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.755916 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj89\" (UniqueName: \"kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89\") pod \"console-56c4884fb5-4p4lw\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.916742 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.968651 4681 generic.go:334] "Generic (PLEG): container finished" podID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerID="427d17e02e448892f5ae843767760c5a27fcd731bbc3ed6efdec3be022b5a7b5" exitCode=0 Apr 04 01:59:26 crc kubenswrapper[4681]: I0404 01:59:26.968759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn5hz" event={"ID":"b964ba7c-9b8c-40d8-b671-915649b4d77b","Type":"ContainerDied","Data":"427d17e02e448892f5ae843767760c5a27fcd731bbc3ed6efdec3be022b5a7b5"} Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.338755 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.478853 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n78gm\" (UniqueName: \"kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm\") pod \"fcd50c8c-38c5-4c42-930d-2235c4384328\" (UID: \"fcd50c8c-38c5-4c42-930d-2235c4384328\") " Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.489879 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm" (OuterVolumeSpecName: "kube-api-access-n78gm") pod "fcd50c8c-38c5-4c42-930d-2235c4384328" (UID: "fcd50c8c-38c5-4c42-930d-2235c4384328"). InnerVolumeSpecName "kube-api-access-n78gm". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.490314 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 01:59:27 crc kubenswrapper[4681]: W0404 01:59:27.523150 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e389ab6_12e2_4fa3_b338_3e2080ab710e.slice/crio-f3f20ee447beb38922d02430fa2b359c153eef5e049e444d2bf66a16e46ea2ae WatchSource:0}: Error finding container f3f20ee447beb38922d02430fa2b359c153eef5e049e444d2bf66a16e46ea2ae: Status 404 returned error can't find the container with id f3f20ee447beb38922d02430fa2b359c153eef5e049e444d2bf66a16e46ea2ae Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.580433 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n78gm\" (UniqueName: \"kubernetes.io/projected/fcd50c8c-38c5-4c42-930d-2235c4384328-kube-api-access-n78gm\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.979206 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587798-tmssr" event={"ID":"fcd50c8c-38c5-4c42-930d-2235c4384328","Type":"ContainerDied","Data":"5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22"} Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.979253 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e2ff0fa32024e08e808e07c16cb5533271b5627f1b4a184fbfce846e62b3d22" Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.979326 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587798-tmssr" Apr 04 01:59:27 crc kubenswrapper[4681]: I0404 01:59:27.984494 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c4884fb5-4p4lw" event={"ID":"1e389ab6-12e2-4fa3-b338-3e2080ab710e","Type":"ContainerStarted","Data":"f3f20ee447beb38922d02430fa2b359c153eef5e049e444d2bf66a16e46ea2ae"} Apr 04 01:59:28 crc kubenswrapper[4681]: I0404 01:59:28.990720 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fn5hz" event={"ID":"b964ba7c-9b8c-40d8-b671-915649b4d77b","Type":"ContainerStarted","Data":"2b0c63a57e7af6bd73fdf9a64faf6acca502dec262f05a6e2a15f5c8f69987b3"} Apr 04 01:59:28 crc kubenswrapper[4681]: I0404 01:59:28.992671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c4884fb5-4p4lw" event={"ID":"1e389ab6-12e2-4fa3-b338-3e2080ab710e","Type":"ContainerStarted","Data":"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7"} Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.015036 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-56c4884fb5-4p4lw" podStartSLOduration=3.015018247 podStartE2EDuration="3.015018247s" podCreationTimestamp="2026-04-04 01:59:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:29.012193618 +0000 UTC m=+248.677968738" watchObservedRunningTime="2026-04-04 01:59:29.015018247 +0000 UTC m=+248.680793357" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.246960 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 01:59:29 crc kubenswrapper[4681]: E0404 01:59:29.247965 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcd50c8c-38c5-4c42-930d-2235c4384328" containerName="oc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.247990 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcd50c8c-38c5-4c42-930d-2235c4384328" containerName="oc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.248128 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcd50c8c-38c5-4c42-930d-2235c4384328" containerName="oc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.251681 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.255400 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.306793 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.306875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.306921 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.306986 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 
04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.307023 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.307065 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.307158 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fx8hc\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.307195 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.326730 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.407993 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408047 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408076 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408097 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408123 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408149 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fx8hc\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408168 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.408800 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.410848 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" 
Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.410975 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.414522 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.426434 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.426905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.426836 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fx8hc\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc\") pod \"image-registry-59f479d65-sqzp5\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " 
pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.443224 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.444156 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.445918 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.447803 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.455344 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.509108 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.509191 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.570812 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.609877 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.609949 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.610021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.638799 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.830314 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:29 crc kubenswrapper[4681]: I0404 01:59:29.962063 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.008076 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.009140 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.009180 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.501065 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-10-crc"] Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.979511 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.981302 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" containerID="cri-o://4dd02b8a213f58779486b466d211efdbc30b605b5c8996b74df64821a0508c12" gracePeriod=30 Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.985504 4681 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:30 crc kubenswrapper[4681]: I0404 01:59:30.985789 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerName="route-controller-manager" containerID="cri-o://577b9b31094b1be4c07d6d0e6dece9540503c35c11615f70876a8ba1412ed3de" gracePeriod=30 Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.018812 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" event={"ID":"752e5159-8e65-48e4-9c60-8eef98e9b792","Type":"ContainerStarted","Data":"fcbe01930201d305b064246bca54e3687a22649c2773802654becbc0d2996dea"} Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.021139 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"8c81a297-30d8-4555-9eec-0da7d04b49eb","Type":"ContainerStarted","Data":"a3b9b3b05a59a46de1dc31cacb9bc3d9663c649a374418e0d0cc0639f4a957d7"} Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.021920 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.021960 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.451381 4681 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.452565 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.464541 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.602354 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.602411 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.602485 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.704135 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " 
pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.704218 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.704254 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.704349 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.704357 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 01:59:31.723227 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:31 crc kubenswrapper[4681]: I0404 
01:59:31.781626 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 01:59:32 crc kubenswrapper[4681]: I0404 01:59:32.027279 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" event={"ID":"752e5159-8e65-48e4-9c60-8eef98e9b792","Type":"ContainerStarted","Data":"7377aac9196bc2cd47c23b0b9ea0ea2d905772ca6ba9e9272883a0acbe33dcec"} Apr 04 01:59:32 crc kubenswrapper[4681]: I0404 01:59:32.203131 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 01:59:32 crc kubenswrapper[4681]: W0404 01:59:32.208196 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd3e2f32_1fb8_4458_938c_25b7e4a3fb33.slice/crio-b22ea33dcfeb50195c4299c5793da2bb081602a030b28ff0e5ab462e04416292 WatchSource:0}: Error finding container b22ea33dcfeb50195c4299c5793da2bb081602a030b28ff0e5ab462e04416292: Status 404 returned error can't find the container with id b22ea33dcfeb50195c4299c5793da2bb081602a030b28ff0e5ab462e04416292 Apr 04 01:59:32 crc kubenswrapper[4681]: I0404 01:59:32.918675 4681 patch_prober.go:28] interesting pod/controller-manager-6564c7f45c-s7m5v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Apr 04 01:59:32 crc kubenswrapper[4681]: I0404 01:59:32.918953 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Apr 04 01:59:33 crc kubenswrapper[4681]: I0404 01:59:33.036340 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"8c81a297-30d8-4555-9eec-0da7d04b49eb","Type":"ContainerStarted","Data":"20705132d1598805561a8ba0e145615a4800b8c9ff823221f4097c2d0c65233e"} Apr 04 01:59:33 crc kubenswrapper[4681]: I0404 01:59:33.039070 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33","Type":"ContainerStarted","Data":"b22ea33dcfeb50195c4299c5793da2bb081602a030b28ff0e5ab462e04416292"} Apr 04 01:59:33 crc kubenswrapper[4681]: I0404 01:59:33.041051 4681 generic.go:334] "Generic (PLEG): container finished" podID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerID="577b9b31094b1be4c07d6d0e6dece9540503c35c11615f70876a8ba1412ed3de" exitCode=0 Apr 04 01:59:33 crc kubenswrapper[4681]: I0404 01:59:33.041088 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" event={"ID":"b62eaf1b-9cab-4297-a047-7993d2506c1d","Type":"ContainerDied","Data":"577b9b31094b1be4c07d6d0e6dece9540503c35c11615f70876a8ba1412ed3de"} Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.048056 4681 generic.go:334] "Generic (PLEG): container finished" podID="441d6509-71d2-4ce0-9408-06023a888142" containerID="4dd02b8a213f58779486b466d211efdbc30b605b5c8996b74df64821a0508c12" exitCode=0 Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.048154 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" event={"ID":"441d6509-71d2-4ce0-9408-06023a888142","Type":"ContainerDied","Data":"4dd02b8a213f58779486b466d211efdbc30b605b5c8996b74df64821a0508c12"} Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.048382 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 
01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.065535 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" podStartSLOduration=5.065498544 podStartE2EDuration="5.065498544s" podCreationTimestamp="2026-04-04 01:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:34.065177214 +0000 UTC m=+253.730952334" watchObservedRunningTime="2026-04-04 01:59:34.065498544 +0000 UTC m=+253.731273664" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.651697 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.669037 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7bdb65549f-f4hr5"] Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.670015 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.680963 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7bdb65549f-f4hr5"] Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850188 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850495 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850525 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " 
pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850617 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850642 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.850885 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrws\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.869213 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952363 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952462 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952498 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952532 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952566 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952611 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rrws\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.952654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.953358 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.953675 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 
01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.953830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.957915 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.967173 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:34 crc kubenswrapper[4681]: I0404 01:59:34.977214 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rrws\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.081687 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33","Type":"ContainerStarted","Data":"cca89d0c625c92de78b959c7681c61100c50a196c94800a28182a6db3a3ec143"} Apr 04 01:59:35 crc 
kubenswrapper[4681]: I0404 01:59:35.102397 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-10-crc" podStartSLOduration=6.10237469 podStartE2EDuration="6.10237469s" podCreationTimestamp="2026-04-04 01:59:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:35.098889281 +0000 UTC m=+254.764664431" watchObservedRunningTime="2026-04-04 01:59:35.10237469 +0000 UTC m=+254.768149820" Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.156702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls\") pod \"image-registry-7bdb65549f-f4hr5\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.303527 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.665256 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.666174 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.680065 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:35 crc kubenswrapper[4681]: I0404 01:59:35.680135 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.094199 4681 generic.go:334] "Generic (PLEG): container finished" podID="8c81a297-30d8-4555-9eec-0da7d04b49eb" containerID="20705132d1598805561a8ba0e145615a4800b8c9ff823221f4097c2d0c65233e" exitCode=0 Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.094404 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" 
event={"ID":"8c81a297-30d8-4555-9eec-0da7d04b49eb","Type":"ContainerDied","Data":"20705132d1598805561a8ba0e145615a4800b8c9ff823221f4097c2d0c65233e"} Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.126625 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-10-crc" podStartSLOduration=5.126609208 podStartE2EDuration="5.126609208s" podCreationTimestamp="2026-04-04 01:59:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 01:59:36.122927315 +0000 UTC m=+255.788702445" watchObservedRunningTime="2026-04-04 01:59:36.126609208 +0000 UTC m=+255.792384328" Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.507553 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-l6r2x" Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.917842 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.917898 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.919746 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 01:59:36 crc kubenswrapper[4681]: I0404 01:59:36.919807 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection 
refused" Apr 04 01:59:39 crc kubenswrapper[4681]: I0404 01:59:39.896900 4681 patch_prober.go:28] interesting pod/route-controller-manager-7fd9496cc9-jndph container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:39 crc kubenswrapper[4681]: I0404 01:59:39.897244 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.566528 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.567579 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.575715 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.575853 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.577089 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.762899 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.763109 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.864618 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.864686 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.864764 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.882341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.911150 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.918573 4681 patch_prober.go:28] interesting pod/controller-manager-6564c7f45c-s7m5v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:43 crc kubenswrapper[4681]: I0404 01:59:43.918666 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:44 crc kubenswrapper[4681]: I0404 01:59:44.452222 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Apr 04 01:59:45 crc kubenswrapper[4681]: I0404 01:59:45.135991 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 01:59:45 crc kubenswrapper[4681]: I0404 01:59:45.654582 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Apr 04 01:59:45 crc kubenswrapper[4681]: I0404 01:59:45.654629 4681 patch_prober.go:28] interesting pod/downloads-7954f5f757-fn5hz container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= 
Apr 04 01:59:45 crc kubenswrapper[4681]: I0404 01:59:45.654651 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:45 crc kubenswrapper[4681]: I0404 01:59:45.654676 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fn5hz" podUID="b964ba7c-9b8c-40d8-b671-915649b4d77b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.21:8080/\": dial tcp 10.217.0.21:8080: connect: connection refused" Apr 04 01:59:46 crc kubenswrapper[4681]: I0404 01:59:46.917867 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 01:59:46 crc kubenswrapper[4681]: I0404 01:59:46.918242 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.721620 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.726520 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917002 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config\") pod \"b62eaf1b-9cab-4297-a047-7993d2506c1d\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917134 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert\") pod \"b62eaf1b-9cab-4297-a047-7993d2506c1d\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917340 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir\") pod \"8c81a297-30d8-4555-9eec-0da7d04b49eb\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917385 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access\") pod \"8c81a297-30d8-4555-9eec-0da7d04b49eb\" (UID: \"8c81a297-30d8-4555-9eec-0da7d04b49eb\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917486 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmh7f\" (UniqueName: \"kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f\") pod \"b62eaf1b-9cab-4297-a047-7993d2506c1d\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917529 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca\") pod \"b62eaf1b-9cab-4297-a047-7993d2506c1d\" (UID: \"b62eaf1b-9cab-4297-a047-7993d2506c1d\") " Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.917599 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8c81a297-30d8-4555-9eec-0da7d04b49eb" (UID: "8c81a297-30d8-4555-9eec-0da7d04b49eb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.918321 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c81a297-30d8-4555-9eec-0da7d04b49eb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.918462 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca" (OuterVolumeSpecName: "client-ca") pod "b62eaf1b-9cab-4297-a047-7993d2506c1d" (UID: "b62eaf1b-9cab-4297-a047-7993d2506c1d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.918618 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config" (OuterVolumeSpecName: "config") pod "b62eaf1b-9cab-4297-a047-7993d2506c1d" (UID: "b62eaf1b-9cab-4297-a047-7993d2506c1d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.923464 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b62eaf1b-9cab-4297-a047-7993d2506c1d" (UID: "b62eaf1b-9cab-4297-a047-7993d2506c1d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.923522 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f" (OuterVolumeSpecName: "kube-api-access-gmh7f") pod "b62eaf1b-9cab-4297-a047-7993d2506c1d" (UID: "b62eaf1b-9cab-4297-a047-7993d2506c1d"). InnerVolumeSpecName "kube-api-access-gmh7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:47 crc kubenswrapper[4681]: I0404 01:59:47.924503 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8c81a297-30d8-4555-9eec-0da7d04b49eb" (UID: "8c81a297-30d8-4555-9eec-0da7d04b49eb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.019525 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8c81a297-30d8-4555-9eec-0da7d04b49eb-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.019573 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmh7f\" (UniqueName: \"kubernetes.io/projected/b62eaf1b-9cab-4297-a047-7993d2506c1d-kube-api-access-gmh7f\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.019589 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.019601 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b62eaf1b-9cab-4297-a047-7993d2506c1d-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.019613 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b62eaf1b-9cab-4297-a047-7993d2506c1d-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.166798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-10-crc" event={"ID":"8c81a297-30d8-4555-9eec-0da7d04b49eb","Type":"ContainerDied","Data":"a3b9b3b05a59a46de1dc31cacb9bc3d9663c649a374418e0d0cc0639f4a957d7"} Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.166850 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b9b3b05a59a46de1dc31cacb9bc3d9663c649a374418e0d0cc0639f4a957d7" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.167921 4681 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-10-crc" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.168662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" event={"ID":"b62eaf1b-9cab-4297-a047-7993d2506c1d","Type":"ContainerDied","Data":"2b6b35ae870480e763d4382c40ec3dcd419db759561b86cfcf9f493064623bac"} Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.168701 4681 scope.go:117] "RemoveContainer" containerID="577b9b31094b1be4c07d6d0e6dece9540503c35c11615f70876a8ba1412ed3de" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.168770 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph" Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.229900 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:48 crc kubenswrapper[4681]: I0404 01:59:48.233026 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7fd9496cc9-jndph"] Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.015211 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" containerID="cri-o://75acc2d5d4742918bd229f104b0e1670f1a8e438b16f50b2fc0a86469929bab3" gracePeriod=15 Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.142184 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 01:59:49 crc kubenswrapper[4681]: E0404 01:59:49.142472 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerName="route-controller-manager" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.142486 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerName="route-controller-manager" Apr 04 01:59:49 crc kubenswrapper[4681]: E0404 01:59:49.142544 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c81a297-30d8-4555-9eec-0da7d04b49eb" containerName="pruner" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.142554 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c81a297-30d8-4555-9eec-0da7d04b49eb" containerName="pruner" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.142680 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" containerName="route-controller-manager" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.142700 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c81a297-30d8-4555-9eec-0da7d04b49eb" containerName="pruner" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.143293 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.150157 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.213107 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b62eaf1b-9cab-4297-a047-7993d2506c1d" path="/var/lib/kubelet/pods/b62eaf1b-9cab-4297-a047-7993d2506c1d/volumes" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.240031 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.240088 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.240111 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.341631 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " 
pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.341706 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.341733 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.341759 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.341814 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.363147 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access\") pod \"installer-9-crc\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.467532 4681 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Apr 04 01:59:49 crc kubenswrapper[4681]: I0404 01:59:49.576027 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.181563 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b28142c-7b85-406e-b158-42517bab7f11" containerID="75acc2d5d4742918bd229f104b0e1670f1a8e438b16f50b2fc0a86469929bab3" exitCode=0 Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.181620 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" event={"ID":"0b28142c-7b85-406e-b158-42517bab7f11","Type":"ContainerDied","Data":"75acc2d5d4742918bd229f104b0e1670f1a8e438b16f50b2fc0a86469929bab3"} Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.610508 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.634378 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.639604 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.639994 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.640420 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.640682 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.641025 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.641967 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.642444 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.762571 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.762675 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.762828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpddn\" (UniqueName: \"kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.763240 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.865146 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.865203 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca\") pod 
\"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.865254 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpddn\" (UniqueName: \"kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.865308 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.866669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.867822 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.873837 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.885100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpddn\" (UniqueName: \"kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn\") pod \"route-controller-manager-7d7b6f8969-zdl55\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:50 crc kubenswrapper[4681]: I0404 01:59:50.962770 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 01:59:51 crc kubenswrapper[4681]: I0404 01:59:51.039703 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.406456 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.407612 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.410223 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5vhrm" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.412832 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.416119 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.589179 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.589339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.589413 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.690999 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.691120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.691166 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.691281 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.691309 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.731814 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access\") pod \"installer-7-crc\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " 
pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:52 crc kubenswrapper[4681]: I0404 01:59:52.742798 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.400522 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.446907 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 01:59:53 crc kubenswrapper[4681]: E0404 01:59:53.447238 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.447256 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.447462 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.448024 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.458173 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507043 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca\") pod \"441d6509-71d2-4ce0-9408-06023a888142\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfqqz\" (UniqueName: \"kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz\") pod \"441d6509-71d2-4ce0-9408-06023a888142\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507142 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert\") pod \"441d6509-71d2-4ce0-9408-06023a888142\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507203 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config\") pod \"441d6509-71d2-4ce0-9408-06023a888142\" (UID: \"441d6509-71d2-4ce0-9408-06023a888142\") " Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507224 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles\") pod \"441d6509-71d2-4ce0-9408-06023a888142\" (UID: 
\"441d6509-71d2-4ce0-9408-06023a888142\") " Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca" (OuterVolumeSpecName: "client-ca") pod "441d6509-71d2-4ce0-9408-06023a888142" (UID: "441d6509-71d2-4ce0-9408-06023a888142"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.507886 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "441d6509-71d2-4ce0-9408-06023a888142" (UID: "441d6509-71d2-4ce0-9408-06023a888142"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.508191 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config" (OuterVolumeSpecName: "config") pod "441d6509-71d2-4ce0-9408-06023a888142" (UID: "441d6509-71d2-4ce0-9408-06023a888142"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.512468 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "441d6509-71d2-4ce0-9408-06023a888142" (UID: "441d6509-71d2-4ce0-9408-06023a888142"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.513953 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz" (OuterVolumeSpecName: "kube-api-access-zfqqz") pod "441d6509-71d2-4ce0-9408-06023a888142" (UID: "441d6509-71d2-4ce0-9408-06023a888142"). InnerVolumeSpecName "kube-api-access-zfqqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.608935 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts6mk\" (UniqueName: \"kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609150 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609209 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609281 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609314 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609378 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609395 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609408 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfqqz\" (UniqueName: \"kubernetes.io/projected/441d6509-71d2-4ce0-9408-06023a888142-kube-api-access-zfqqz\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609421 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/441d6509-71d2-4ce0-9408-06023a888142-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.609431 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/441d6509-71d2-4ce0-9408-06023a888142-config\") on node \"crc\" DevicePath \"\"" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.710696 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.710764 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.710843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts6mk\" (UniqueName: \"kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.710913 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.710983 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.712032 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.712428 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.714168 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.716672 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.740013 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-ts6mk\" (UniqueName: \"kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk\") pod \"controller-manager-57f89ccc88-6hpw6\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.776642 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.918597 4681 patch_prober.go:28] interesting pod/controller-manager-6564c7f45c-s7m5v container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:53 crc kubenswrapper[4681]: I0404 01:59:53.918946 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" podUID="441d6509-71d2-4ce0-9408-06023a888142" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:54 crc kubenswrapper[4681]: I0404 01:59:54.208520 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" event={"ID":"441d6509-71d2-4ce0-9408-06023a888142","Type":"ContainerDied","Data":"74d740c8534513d38b3dbdbfe77561694ca6fe2136b47d3db3bd27ac4197d20e"} Apr 04 01:59:54 crc kubenswrapper[4681]: I0404 01:59:54.208860 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6564c7f45c-s7m5v" Apr 04 01:59:54 crc kubenswrapper[4681]: I0404 01:59:54.235346 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:54 crc kubenswrapper[4681]: I0404 01:59:54.238747 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6564c7f45c-s7m5v"] Apr 04 01:59:55 crc kubenswrapper[4681]: I0404 01:59:55.207331 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="441d6509-71d2-4ce0-9408-06023a888142" path="/var/lib/kubelet/pods/441d6509-71d2-4ce0-9408-06023a888142/volumes" Apr 04 01:59:55 crc kubenswrapper[4681]: I0404 01:59:55.669933 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fn5hz" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.524688 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.524780 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.653671 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.654972 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.662698 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.697954 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.698019 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.848470 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.848765 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-10-crc" podUID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" containerName="installer" containerID="cri-o://cca89d0c625c92de78b959c7681c61100c50a196c94800a28182a6db3a3ec143" gracePeriod=30 Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.855198 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: 
\"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.855382 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.918291 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.918369 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.956732 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.956817 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " 
pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.956892 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:56 crc kubenswrapper[4681]: I0404 01:59:56.977657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:57 crc kubenswrapper[4681]: I0404 01:59:57.272709 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 01:59:57 crc kubenswrapper[4681]: I0404 01:59:57.551485 4681 patch_prober.go:28] interesting pod/router-default-5444994796-d25mp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 01:59:57 crc kubenswrapper[4681]: I0404 01:59:57.551748 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-d25mp" podUID="8b59f8f0-e1c8-4187-a509-1a7f58a0ba37" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 01:59:59 crc kubenswrapper[4681]: I0404 01:59:59.247948 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Apr 04 01:59:59 crc 
kubenswrapper[4681]: I0404 01:59:59.248795 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.254899 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.296981 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.297045 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.297082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.398653 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 
01:59:59.398706 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.398771 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.398856 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.398931 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.417223 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access\") pod \"installer-11-crc\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.609246 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 01:59:59.707882 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" containerID="cri-o://a7e788b7107c4e3259472a3dc00b36075c3e1912a5f966f02120b87a1eac6fa5" gracePeriod=30 Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.156328 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q"] Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.157328 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.159940 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.159998 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.165721 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587800-xd7wt"] Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.166469 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.170304 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587800-xd7wt"] Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.172293 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.172512 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.172754 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.173668 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q"] Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.312612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.312854 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shz9h\" (UniqueName: \"kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h\") pod \"auto-csr-approver-29587800-xd7wt\" (UID: \"9eff11bc-ec44-4492-90f2-c24f4b0438bc\") " pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.312877 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.312904 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqk24\" (UniqueName: \"kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.413884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.413928 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shz9h\" (UniqueName: \"kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h\") pod \"auto-csr-approver-29587800-xd7wt\" (UID: \"9eff11bc-ec44-4492-90f2-c24f4b0438bc\") " pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.413946 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.413972 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqk24\" (UniqueName: \"kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.415175 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.422617 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.437297 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqk24\" (UniqueName: \"kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24\") pod \"collect-profiles-29587800-g4n9q\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.442918 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shz9h\" (UniqueName: 
\"kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h\") pod \"auto-csr-approver-29587800-xd7wt\" (UID: \"9eff11bc-ec44-4492-90f2-c24f4b0438bc\") " pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.517105 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:00.523490 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:01.260480 4681 generic.go:334] "Generic (PLEG): container finished" podID="b26036bc-4cff-472f-a379-8dc4541cf018" containerID="a7e788b7107c4e3259472a3dc00b36075c3e1912a5f966f02120b87a1eac6fa5" exitCode=0 Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:01.260515 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" event={"ID":"b26036bc-4cff-472f-a379-8dc4541cf018","Type":"ContainerDied","Data":"a7e788b7107c4e3259472a3dc00b36075c3e1912a5f966f02120b87a1eac6fa5"} Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:05.129754 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body= Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:05.130083 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 
02:00:06.698452 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:06.698539 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:06.918683 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:06.918771 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:09.313351 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_cd3e2f32-1fb8-4458-938c-25b7e4a3fb33/installer/0.log" Apr 04 02:00:13 crc kubenswrapper[4681]: I0404 02:00:09.313809 4681 generic.go:334] "Generic (PLEG): container finished" podID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" containerID="cca89d0c625c92de78b959c7681c61100c50a196c94800a28182a6db3a3ec143" exitCode=1 Apr 04 02:00:13 crc 
kubenswrapper[4681]: I0404 02:00:09.313857 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33","Type":"ContainerDied","Data":"cca89d0c625c92de78b959c7681c61100c50a196c94800a28182a6db3a3ec143"} Apr 04 02:00:15 crc kubenswrapper[4681]: I0404 02:00:15.130333 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body= Apr 04 02:00:15 crc kubenswrapper[4681]: I0404 02:00:15.130815 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" Apr 04 02:00:15 crc kubenswrapper[4681]: I0404 02:00:15.276840 4681 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-blqhv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Apr 04 02:00:15 crc kubenswrapper[4681]: I0404 02:00:15.276939 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" podUID="5bb4d019-ae1f-4aa2-b255-a6974c4edf4a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Apr 04 02:00:16 crc kubenswrapper[4681]: I0404 02:00:16.697865 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: i/o timeout" start-of-body= Apr 04 02:00:16 crc kubenswrapper[4681]: I0404 02:00:16.697953 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: i/o timeout" Apr 04 02:00:16 crc kubenswrapper[4681]: I0404 02:00:16.698062 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 02:00:16 crc kubenswrapper[4681]: I0404 02:00:16.919860 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:00:16 crc kubenswrapper[4681]: I0404 02:00:16.919930 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:00:25 crc kubenswrapper[4681]: I0404 02:00:25.130394 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body= Apr 04 02:00:25 crc kubenswrapper[4681]: I0404 02:00:25.131050 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" 
podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused"
Apr 04 02:00:25 crc kubenswrapper[4681]: I0404 02:00:25.131153 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv"
Apr 04 02:00:25 crc kubenswrapper[4681]: I0404 02:00:25.275826 4681 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-blqhv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Apr 04 02:00:25 crc kubenswrapper[4681]: I0404 02:00:25.275911 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" podUID="5bb4d019-ae1f-4aa2-b255-a6974c4edf4a" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.524067 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.524163 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.697986 4681
patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.698338 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.918990 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body=
Apr 04 02:00:26 crc kubenswrapper[4681]: I0404 02:00:26.919050 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused"
Apr 04 02:00:31 crc kubenswrapper[4681]: I0404 02:00:31.839994 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:31 crc kubenswrapper[4681]: I0404 02:00:31.839997 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd
container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:31 crc kubenswrapper[4681]: I0404 02:00:31.840193 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:31 crc kubenswrapper[4681]: I0404 02:00:31.840103 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:34 crc kubenswrapper[4681]: I0404 02:00:34.839927 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:34 crc kubenswrapper[4681]: I0404 02:00:34.840361 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:34 crc kubenswrapper[4681]: I0404 02:00:34.839933 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:34 crc kubenswrapper[4681]: I0404 02:00:34.840446 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.129696 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body=
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.129768 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused"
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.277104 4681 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-blqhv container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body=
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.277198 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" podUID="5bb4d019-ae1f-4aa2-b255-a6974c4edf4a"
containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused"
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.277301 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv"
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.278101 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"dfb53a9838ae7da2900b72730dec271d562c080298c0a9671181461f7be926f9"} pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Apr 04 02:00:35 crc kubenswrapper[4681]: I0404 02:00:35.278165 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" podUID="5bb4d019-ae1f-4aa2-b255-a6974c4edf4a" containerName="authentication-operator" containerID="cri-o://dfb53a9838ae7da2900b72730dec271d562c080298c0a9671181461f7be926f9" gracePeriod=30
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.701497 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.701577 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.918400 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body=
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.918478 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused"
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.964294 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.964435 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.964517 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while
awaiting headers)" start-of-body=
Apr 04 02:00:36 crc kubenswrapper[4681]: I0404 02:00:36.964554 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.839940 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.839989 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.840035 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.840068 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp
10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.840119 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.840248 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.841147 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9814347660aac18022f412075fdc0df5de3558d1b5ffe6e7fbd8e6cbaebc905b"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.841214 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" containerID="cri-o://9814347660aac18022f412075fdc0df5de3558d1b5ffe6e7fbd8e6cbaebc905b" gracePeriod=30
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.842115 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:37 crc kubenswrapper[4681]: I0404 02:00:37.844473 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure"
output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:40 crc kubenswrapper[4681]: I0404 02:00:40.840112 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:40 crc kubenswrapper[4681]: I0404 02:00:40.840591 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:43 crc kubenswrapper[4681]: I0404 02:00:43.839750 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:43 crc kubenswrapper[4681]: I0404 02:00:43.839826 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:45 crc kubenswrapper[4681]: I0404 02:00:45.131166 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused"
start-of-body=
Apr 04 02:00:45 crc kubenswrapper[4681]: I0404 02:00:45.132835 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused"
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.698962 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.699287 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.840638 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.840781 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect:
connection refused"
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.918085 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body=
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.918160 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused"
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.964538 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.964470 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:46 crc kubenswrapper[4681]: I0404 02:00:46.964630 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:46 crc
kubenswrapper[4681]: I0404 02:00:46.964748 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:47 crc kubenswrapper[4681]: I0404 02:00:47.016681 4681 scope.go:117] "RemoveContainer" containerID="4dd02b8a213f58779486b466d211efdbc30b605b5c8996b74df64821a0508c12"
Apr 04 02:00:49 crc kubenswrapper[4681]: E0404 02:00:49.009848 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Apr 04 02:00:49 crc kubenswrapper[4681]: E0404 02:00:49.010320 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdgzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xdpcd_openshift-marketplace(cbcf0420-aff0-484c-9c2b-134552760373): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b\": context canceled" logger="UnhandledError" Apr 04 02:00:49 crc kubenswrapper[4681]: E0404 02:00:49.011571 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:fee7ad88261227d651bd0b69fa04e516e2f9926e5d9cf08495b140471787265b\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-xdpcd" podUID="cbcf0420-aff0-484c-9c2b-134552760373"
Apr 04 02:00:49 crc kubenswrapper[4681]: I0404 02:00:49.840424 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:49 crc kubenswrapper[4681]: I0404 02:00:49.840473 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:52 crc kubenswrapper[4681]: I0404 02:00:52.840250 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:52 crc kubenswrapper[4681]: I0404 02:00:52.840382 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:55 crc
kubenswrapper[4681]: I0404 02:00:55.129400 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body=
Apr 04 02:00:55 crc kubenswrapper[4681]: I0404 02:00:55.130485 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused"
Apr 04 02:00:55 crc kubenswrapper[4681]: I0404 02:00:55.813350 4681 patch_prober.go:28] interesting pod/etcd-operator-b45778765-qj9xc container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body=
Apr 04 02:00:55 crc kubenswrapper[4681]: I0404 02:00:55.813459 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" podUID="3c1605aa-6f4d-4754-9e6e-5f2c2d564f73" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/healthz\": dial tcp 10.217.0.14:8443: connect: connection refused"
Apr 04 02:00:55 crc kubenswrapper[4681]: I0404 02:00:55.840539 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Apr 04 02:00:55 crc kubenswrapper[4681]: I0404 02:00:55.840619 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd"
podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.523720 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.523806 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.523863 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.524653 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.524754 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5" gracePeriod=600
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.698534 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.698606 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.918723 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body=
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.918794 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused"
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964082 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964156 4681
patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964186 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964214 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964254 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.964395 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.965152 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"2a1f24c7419777930ac69ece450ebbe839f350b3626d923e81d0e0b7a955385c"} pod="openshift-console-operator/console-operator-58897d9998-tq7nn" containerMessage="Container 
console-operator failed liveness probe, will be restarted" Apr 04 02:00:56 crc kubenswrapper[4681]: I0404 02:00:56.965222 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" containerID="cri-o://2a1f24c7419777930ac69ece450ebbe839f350b3626d923e81d0e0b7a955385c" gracePeriod=30 Apr 04 02:00:57 crc kubenswrapper[4681]: I0404 02:00:57.965993 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:00:57 crc kubenswrapper[4681]: I0404 02:00:57.966109 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:00:58 crc kubenswrapper[4681]: I0404 02:00:58.841165 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:00:58 crc kubenswrapper[4681]: I0404 02:00:58.841244 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:01 crc kubenswrapper[4681]: I0404 02:01:01.840937 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:01 crc kubenswrapper[4681]: I0404 02:01:01.841349 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:04 crc kubenswrapper[4681]: I0404 02:01:04.678635 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5" exitCode=0 Apr 04 02:01:04 crc kubenswrapper[4681]: I0404 02:01:04.678700 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5"} Apr 04 02:01:04 crc kubenswrapper[4681]: I0404 02:01:04.841725 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:04 crc kubenswrapper[4681]: I0404 02:01:04.841833 4681 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:05 crc kubenswrapper[4681]: I0404 02:01:05.129449 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body= Apr 04 02:01:05 crc kubenswrapper[4681]: I0404 02:01:05.129539 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.698337 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.698510 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.918977 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw 
container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.919079 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.963498 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:01:06 crc kubenswrapper[4681]: I0404 02:01:06.963574 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:07 crc kubenswrapper[4681]: I0404 02:01:07.840249 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:07 crc kubenswrapper[4681]: I0404 02:01:07.840790 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" 
podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:10 crc kubenswrapper[4681]: I0404 02:01:10.840396 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:10 crc kubenswrapper[4681]: I0404 02:01:10.840509 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:10 crc kubenswrapper[4681]: E0404 02:01:10.977159 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3942587704/4\": happened during read: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Apr 04 02:01:10 crc kubenswrapper[4681]: E0404 02:01:10.977529 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9k9ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kmgrn_openshift-marketplace(c99a24fb-60ac-48a9-9158-40827f6e3737): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage3942587704/4\": happened during read: context canceled" logger="UnhandledError" Apr 04 02:01:10 crc kubenswrapper[4681]: E0404 02:01:10.978829 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage3942587704/4\\\": happened during read: context canceled\"" pod="openshift-marketplace/certified-operators-kmgrn" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" Apr 04 02:01:13 crc kubenswrapper[4681]: I0404 02:01:13.840293 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:13 crc kubenswrapper[4681]: I0404 02:01:13.840379 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:15 crc kubenswrapper[4681]: I0404 02:01:15.129107 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" start-of-body= Apr 04 02:01:15 crc kubenswrapper[4681]: I0404 02:01:15.129642 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": dial tcp 10.217.0.36:5000: connect: connection refused" Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.699065 4681 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t522l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 
10.217.0.25:6443: i/o timeout" start-of-body= Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.699121 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.25:6443/healthz\": dial tcp 10.217.0.25:6443: i/o timeout" Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.839686 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.839767 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.918745 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.918861 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.964669 4681 patch_prober.go:28] interesting 
pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:01:16 crc kubenswrapper[4681]: I0404 02:01:16.964742 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:19 crc kubenswrapper[4681]: I0404 02:01:19.488703 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-gwptd_0cedaefc-2211-4575-8993-8aff39f0d5a3/openshift-config-operator/0.log" Apr 04 02:01:19 crc kubenswrapper[4681]: I0404 02:01:19.489314 4681 generic.go:334] "Generic (PLEG): container finished" podID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerID="9814347660aac18022f412075fdc0df5de3558d1b5ffe6e7fbd8e6cbaebc905b" exitCode=-1 Apr 04 02:01:19 crc kubenswrapper[4681]: I0404 02:01:19.489353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerDied","Data":"9814347660aac18022f412075fdc0df5de3558d1b5ffe6e7fbd8e6cbaebc905b"} Apr 04 02:01:19 crc kubenswrapper[4681]: I0404 02:01:19.840694 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:19 
crc kubenswrapper[4681]: I0404 02:01:19.840780 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.638721 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.678235 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:01:21 crc kubenswrapper[4681]: E0404 02:01:21.678534 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.678550 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.678789 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b28142c-7b85-406e-b158-42517bab7f11" containerName="oauth-openshift" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.679502 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.686825 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722680 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722708 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722735 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722759 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722814 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722836 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722875 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722919 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722946 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvqhh\" (UniqueName: 
\"kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.722973 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723041 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723094 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig\") pod \"0b28142c-7b85-406e-b158-42517bab7f11\" (UID: \"0b28142c-7b85-406e-b158-42517bab7f11\") " Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723288 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723490 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723526 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723594 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmcgb\" (UniqueName: \"kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: 
\"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723621 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723483 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723500 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723915 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.723649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724129 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724169 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724199 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724386 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724433 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724567 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724654 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724688 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724708 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-audit-policies\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724720 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0b28142c-7b85-406e-b158-42517bab7f11-audit-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.724769 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.727830 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.728440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.728488 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.741419 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.741513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh" (OuterVolumeSpecName: "kube-api-access-wvqhh") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "kube-api-access-wvqhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.741633 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.742085 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.742318 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.742550 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0b28142c-7b85-406e-b158-42517bab7f11" (UID: "0b28142c-7b85-406e-b158-42517bab7f11"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826105 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826239 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826441 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826483 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.826520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc 
kubenswrapper[4681]: I0404 02:01:21.826534 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827256 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827548 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827732 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827778 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827880 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmcgb\" (UniqueName: \"kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828002 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828033 4681 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828053 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828074 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828095 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828114 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvqhh\" (UniqueName: \"kubernetes.io/projected/0b28142c-7b85-406e-b158-42517bab7f11-kube-api-access-wvqhh\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828136 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828159 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc 
kubenswrapper[4681]: I0404 02:01:21.828179 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.828198 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0b28142c-7b85-406e-b158-42517bab7f11-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.827830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.829384 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.829437 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.831846 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.831865 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.833588 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.834144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.834726 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " 
pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.834875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.835768 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.842824 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:21 crc kubenswrapper[4681]: I0404 02:01:21.859095 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmcgb\" (UniqueName: \"kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb\") pod \"oauth-openshift-67445d46b-m2v67\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.005646 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.509912 4681 generic.go:334] "Generic (PLEG): container finished" podID="42fde299-09b3-4bec-83c9-71af1d27475a" containerID="a606b74ce3d0f018b88b04eff4acd3a2d0fd8e0da446a3c38609b52e7180e83c" exitCode=0 Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.509980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" event={"ID":"42fde299-09b3-4bec-83c9-71af1d27475a","Type":"ContainerDied","Data":"a606b74ce3d0f018b88b04eff4acd3a2d0fd8e0da446a3c38609b52e7180e83c"} Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.511451 4681 scope.go:117] "RemoveContainer" containerID="a606b74ce3d0f018b88b04eff4acd3a2d0fd8e0da446a3c38609b52e7180e83c" Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.512711 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" event={"ID":"0b28142c-7b85-406e-b158-42517bab7f11","Type":"ContainerDied","Data":"40efb309662502b632afcd3f3796db3bc782ef5d2337372770a5f6aa135f37ce"} Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.512843 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t522l" Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.566833 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.571404 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t522l"] Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.862584 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:22 crc kubenswrapper[4681]: I0404 02:01:22.862685 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.211791 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b28142c-7b85-406e-b158-42517bab7f11" path="/var/lib/kubelet/pods/0b28142c-7b85-406e-b158-42517bab7f11/volumes" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.545639 4681 generic.go:334] "Generic (PLEG): container finished" podID="966e01cf-5149-43ef-8967-517e68e2bbaa" containerID="a18bb7e65df8663198eaf96166d1c10a5d066f067a893c6e55b67a1aa4ca67f7" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.545773 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" 
event={"ID":"966e01cf-5149-43ef-8967-517e68e2bbaa","Type":"ContainerDied","Data":"a18bb7e65df8663198eaf96166d1c10a5d066f067a893c6e55b67a1aa4ca67f7"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.546551 4681 scope.go:117] "RemoveContainer" containerID="a18bb7e65df8663198eaf96166d1c10a5d066f067a893c6e55b67a1aa4ca67f7" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.550057 4681 generic.go:334] "Generic (PLEG): container finished" podID="cbede535-d73e-41cf-b483-6f6794647f90" containerID="137b49d36236682a5892f9c88d55ba606f4c8f442ad4a967ca1e8a28140cda4a" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.550083 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" event={"ID":"cbede535-d73e-41cf-b483-6f6794647f90","Type":"ContainerDied","Data":"137b49d36236682a5892f9c88d55ba606f4c8f442ad4a967ca1e8a28140cda4a"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.551039 4681 scope.go:117] "RemoveContainer" containerID="137b49d36236682a5892f9c88d55ba606f4c8f442ad4a967ca1e8a28140cda4a" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.553140 4681 generic.go:334] "Generic (PLEG): container finished" podID="5bb4d019-ae1f-4aa2-b255-a6974c4edf4a" containerID="dfb53a9838ae7da2900b72730dec271d562c080298c0a9671181461f7be926f9" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.553233 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" event={"ID":"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a","Type":"ContainerDied","Data":"dfb53a9838ae7da2900b72730dec271d562c080298c0a9671181461f7be926f9"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.555198 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-756b6f6bc6-ntv7l_b3fc9a5b-081d-4321-ac46-42992adcf541/openshift-controller-manager-operator/0.log" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.555236 4681 generic.go:334] "Generic (PLEG): container finished" podID="b3fc9a5b-081d-4321-ac46-42992adcf541" containerID="48f92583d96237bf98858630a8c28a5d19fb0c8bbf6e89d15e95cb5ff6a6207b" exitCode=1 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.555302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" event={"ID":"b3fc9a5b-081d-4321-ac46-42992adcf541","Type":"ContainerDied","Data":"48f92583d96237bf98858630a8c28a5d19fb0c8bbf6e89d15e95cb5ff6a6207b"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.555554 4681 scope.go:117] "RemoveContainer" containerID="48f92583d96237bf98858630a8c28a5d19fb0c8bbf6e89d15e95cb5ff6a6207b" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.556750 4681 generic.go:334] "Generic (PLEG): container finished" podID="36b289c9-56bc-4b1a-ab7d-1777b34bcaf4" containerID="506ea35e33ea7f12bd1f1a9804e6494b655169d3b6a4d9f0bf515024a8afb565" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.556783 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" event={"ID":"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4","Type":"ContainerDied","Data":"506ea35e33ea7f12bd1f1a9804e6494b655169d3b6a4d9f0bf515024a8afb565"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.558390 4681 generic.go:334] "Generic (PLEG): container finished" podID="37a5e44f-9a88-4405-be8a-b645485e7312" containerID="95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.558463 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerDied","Data":"95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.559474 4681 scope.go:117] "RemoveContainer" containerID="95920083f6bec0e4ed18a274f21375b8afaeeef1f90c35efddfe641316d1b622" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.559660 4681 scope.go:117] "RemoveContainer" containerID="506ea35e33ea7f12bd1f1a9804e6494b655169d3b6a4d9f0bf515024a8afb565" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.560061 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-tq7nn_394b01ea-0b57-4565-aa56-96b6c5372a15/console-operator/0.log" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.560114 4681 generic.go:334] "Generic (PLEG): container finished" podID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerID="2a1f24c7419777930ac69ece450ebbe839f350b3626d923e81d0e0b7a955385c" exitCode=1 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.560164 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" event={"ID":"394b01ea-0b57-4565-aa56-96b6c5372a15","Type":"ContainerDied","Data":"2a1f24c7419777930ac69ece450ebbe839f350b3626d923e81d0e0b7a955385c"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.565697 4681 generic.go:334] "Generic (PLEG): container finished" podID="f9a818bf-9611-4945-9350-97ca20b42b26" containerID="e6d53063a95559c19e9ca96dc2fb8363db32e94796634599bd272c393ad6976b" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.565787 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" 
event={"ID":"f9a818bf-9611-4945-9350-97ca20b42b26","Type":"ContainerDied","Data":"e6d53063a95559c19e9ca96dc2fb8363db32e94796634599bd272c393ad6976b"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.566250 4681 scope.go:117] "RemoveContainer" containerID="e6d53063a95559c19e9ca96dc2fb8363db32e94796634599bd272c393ad6976b" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.572015 4681 generic.go:334] "Generic (PLEG): container finished" podID="559af3cb-f642-4e99-91e1-155840a1629c" containerID="10004ce6501c19d7f74b260f5bea4cd3eee850d332c372dd4781b2c61490db22" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.572080 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" event={"ID":"559af3cb-f642-4e99-91e1-155840a1629c","Type":"ContainerDied","Data":"10004ce6501c19d7f74b260f5bea4cd3eee850d332c372dd4781b2c61490db22"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.572615 4681 scope.go:117] "RemoveContainer" containerID="10004ce6501c19d7f74b260f5bea4cd3eee850d332c372dd4781b2c61490db22" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.574801 4681 generic.go:334] "Generic (PLEG): container finished" podID="3c1605aa-6f4d-4754-9e6e-5f2c2d564f73" containerID="bf1675afa1371d17b5bb0f92e7a0e8065323d33a0ec0a6ebe8fff02c09a13723" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.574842 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" event={"ID":"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73","Type":"ContainerDied","Data":"bf1675afa1371d17b5bb0f92e7a0e8065323d33a0ec0a6ebe8fff02c09a13723"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.575077 4681 scope.go:117] "RemoveContainer" containerID="bf1675afa1371d17b5bb0f92e7a0e8065323d33a0ec0a6ebe8fff02c09a13723" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.578757 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="867152c5-9f9e-40b4-8623-3437a9793b5d" containerID="f699336c8258ea169d6c3c60ab186072142716549775d122c424fef520decb32" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.578803 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" event={"ID":"867152c5-9f9e-40b4-8623-3437a9793b5d","Type":"ContainerDied","Data":"f699336c8258ea169d6c3c60ab186072142716549775d122c424fef520decb32"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.579073 4681 scope.go:117] "RemoveContainer" containerID="f699336c8258ea169d6c3c60ab186072142716549775d122c424fef520decb32" Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.592106 4681 generic.go:334] "Generic (PLEG): container finished" podID="1182c93a-3e68-4418-aeb7-8394689b55c2" containerID="2753d908d6de165cd9e4ec66646ecc2bce0121441e484548758f88050cc459a6" exitCode=0 Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.592167 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" event={"ID":"1182c93a-3e68-4418-aeb7-8394689b55c2","Type":"ContainerDied","Data":"2753d908d6de165cd9e4ec66646ecc2bce0121441e484548758f88050cc459a6"} Apr 04 02:01:23 crc kubenswrapper[4681]: I0404 02:01:23.593102 4681 scope.go:117] "RemoveContainer" containerID="2753d908d6de165cd9e4ec66646ecc2bce0121441e484548758f88050cc459a6" Apr 04 02:01:25 crc kubenswrapper[4681]: I0404 02:01:25.839621 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:25 crc kubenswrapper[4681]: I0404 02:01:25.839960 4681 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:25 crc kubenswrapper[4681]: I0404 02:01:25.963803 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 02:01:25 crc kubenswrapper[4681]: I0404 02:01:25.963863 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 02:01:26 crc kubenswrapper[4681]: I0404 02:01:26.918597 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:26 crc kubenswrapper[4681]: I0404 02:01:26.918926 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:28 crc kubenswrapper[4681]: I0404 02:01:28.840663 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:28 crc kubenswrapper[4681]: I0404 02:01:28.840762 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:30 crc kubenswrapper[4681]: I0404 02:01:30.129224 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:01:30 crc kubenswrapper[4681]: I0404 02:01:30.129330 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:31 crc kubenswrapper[4681]: I0404 02:01:31.856975 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:31 crc kubenswrapper[4681]: I0404 02:01:31.857062 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:34 crc kubenswrapper[4681]: I0404 02:01:34.840810 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:34 crc kubenswrapper[4681]: I0404 02:01:34.841292 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:35 crc kubenswrapper[4681]: I0404 02:01:35.963996 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 02:01:35 crc kubenswrapper[4681]: I0404 02:01:35.964066 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 02:01:36 crc kubenswrapper[4681]: I0404 02:01:36.918499 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:36 
crc kubenswrapper[4681]: I0404 02:01:36.918577 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:37 crc kubenswrapper[4681]: I0404 02:01:37.840469 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:37 crc kubenswrapper[4681]: I0404 02:01:37.840544 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:40 crc kubenswrapper[4681]: I0404 02:01:40.129148 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:01:40 crc kubenswrapper[4681]: I0404 02:01:40.129564 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:40 crc kubenswrapper[4681]: 
E0404 02:01:40.322955 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Apr 04 02:01:40 crc kubenswrapper[4681]: E0404 02:01:40.323189 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r2bfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-5qn4m_openshift-marketplace(f00114dc-2aae-4d37-8143-71336f144be3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:40 crc kubenswrapper[4681]: E0404 02:01:40.324482 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5qn4m" podUID="f00114dc-2aae-4d37-8143-71336f144be3" Apr 04 02:01:40 crc kubenswrapper[4681]: I0404 02:01:40.840074 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:40 crc kubenswrapper[4681]: I0404 02:01:40.840150 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:43 crc kubenswrapper[4681]: I0404 02:01:43.840127 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:43 crc kubenswrapper[4681]: I0404 02:01:43.841356 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" 
podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:45 crc kubenswrapper[4681]: I0404 02:01:45.964928 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 02:01:45 crc kubenswrapper[4681]: I0404 02:01:45.964997 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 02:01:46 crc kubenswrapper[4681]: E0404 02:01:46.129416 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5qn4m" podUID="f00114dc-2aae-4d37-8143-71336f144be3" Apr 04 02:01:46 crc kubenswrapper[4681]: I0404 02:01:46.839758 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:46 crc kubenswrapper[4681]: I0404 02:01:46.840257 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:46 crc kubenswrapper[4681]: I0404 02:01:46.918315 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:46 crc kubenswrapper[4681]: I0404 02:01:46.918368 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:49 crc kubenswrapper[4681]: I0404 02:01:49.839674 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:49 crc kubenswrapper[4681]: I0404 02:01:49.839761 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:50 crc kubenswrapper[4681]: I0404 02:01:50.129558 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Apr 04 02:01:50 crc kubenswrapper[4681]: I0404 02:01:50.129681 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:01:50 crc kubenswrapper[4681]: E0404 02:01:50.424111 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Apr 04 02:01:50 crc kubenswrapper[4681]: E0404 02:01:50.424647 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kb4kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-m8stk_openshift-marketplace(da41f745-08e9-4d36-ad1d-3b054a4f0a2f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:50 crc kubenswrapper[4681]: E0404 02:01:50.426839 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-m8stk" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" Apr 04 02:01:52 crc 
kubenswrapper[4681]: E0404 02:01:52.442112 4681 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Apr 04 02:01:52 crc kubenswrapper[4681]: E0404 02:01:52.442492 4681 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 04 02:01:52 crc kubenswrapper[4681]: I0404 02:01:52.442511 4681 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Apr 04 02:01:52 crc kubenswrapper[4681]: I0404 02:01:52.839791 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:52 crc kubenswrapper[4681]: I0404 02:01:52.839852 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:53 crc kubenswrapper[4681]: E0404 02:01:53.824094 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-m8stk" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" Apr 04 02:01:53 crc kubenswrapper[4681]: E0404 02:01:53.954167 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
image="registry.redhat.io/redhat/certified-operator-index:v4.18" Apr 04 02:01:53 crc kubenswrapper[4681]: E0404 02:01:53.954327 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9k9ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-kmgrn_openshift-marketplace(c99a24fb-60ac-48a9-9158-40827f6e3737): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 
02:01:53 crc kubenswrapper[4681]: E0404 02:01:53.955649 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-kmgrn" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.092227 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.092578 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z87tn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dvhf7_openshift-marketplace(b0cbd40c-5c8c-451b-af65-fb67ba867ced): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.093750 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-dvhf7" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" Apr 04 02:01:54 crc 
kubenswrapper[4681]: I0404 02:01:54.140763 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.211936 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.212217 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s4jdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLog
sOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sv8f4_openshift-marketplace(72699dc0-10a9-45c2-9be8-e7a48b8f4edb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:54 crc kubenswrapper[4681]: E0404 02:01:54.214380 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sv8f4" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.602141 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dvhf7" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.602612 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kmgrn" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.602835 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sv8f4" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 
02:01:55.702137 4681 scope.go:117] "RemoveContainer" containerID="75acc2d5d4742918bd229f104b0e1670f1a8e438b16f50b2fc0a86469929bab3" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.739575 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.754662 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-10-crc_cd3e2f32-1fb8-4458-938c-25b7e4a3fb33/installer/0.log" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.754767 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.810124 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"669511ba-ae82-499c-bf4e-a2893e990205","Type":"ContainerStarted","Data":"bee42f0e988a2d17781091179c96881098621f17fbe01dc2f09fcae2e05002b9"} Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.810736 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.817239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-10-crc" event={"ID":"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33","Type":"ContainerDied","Data":"b22ea33dcfeb50195c4299c5793da2bb081602a030b28ff0e5ab462e04416292"} Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.817325 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-10-crc" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.826605 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" event={"ID":"b26036bc-4cff-472f-a379-8dc4541cf018","Type":"ContainerDied","Data":"28c30d2faad9128ae836e246429d7cae5b3e5eed7ae23ca96311680854fe8597"} Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.826702 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.839510 4681 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gwptd container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.839553 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" podUID="0cedaefc-2211-4575-8993-8aff39f0d5a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/healthz\": dial tcp 10.217.0.16:8443: connect: connection refused" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.870802 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.870979 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs 
--catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpblw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-w2zrg_openshift-marketplace(1b3e95cc-25d6-4efd-8828-894657c29bcb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:55 crc kubenswrapper[4681]: E0404 02:01:55.873111 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-w2zrg" 
podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891412 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891473 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891587 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891670 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891700 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891759 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.891823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxjjm\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.892054 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"b26036bc-4cff-472f-a379-8dc4541cf018\" (UID: \"b26036bc-4cff-472f-a379-8dc4541cf018\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.892093 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir\") pod \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.892154 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock\") pod \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") " Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.892225 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access\") pod \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\" (UID: \"cd3e2f32-1fb8-4458-938c-25b7e4a3fb33\") 
" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.892769 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.897497 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" (UID: "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.897531 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock" (OuterVolumeSpecName: "var-lock") pod "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" (UID: "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.911909 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.912328 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.914918 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.915341 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" (UID: "cd3e2f32-1fb8-4458-938c-25b7e4a3fb33"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.916020 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.916858 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.919160 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7bdb65549f-f4hr5"] Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.922246 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm" (OuterVolumeSpecName: "kube-api-access-zxjjm") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "kube-api-access-zxjjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.927534 4681 scope.go:117] "RemoveContainer" containerID="cca89d0c625c92de78b959c7681c61100c50a196c94800a28182a6db3a3ec143" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.963853 4681 patch_prober.go:28] interesting pod/console-operator-58897d9998-tq7nn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.963898 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" podUID="394b01ea-0b57-4565-aa56-96b6c5372a15" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.973851 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "b26036bc-4cff-472f-a379-8dc4541cf018" (UID: "b26036bc-4cff-472f-a379-8dc4541cf018"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993613 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993638 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993648 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-bound-sa-token\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993655 4681 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-registry-tls\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993666 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993675 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b26036bc-4cff-472f-a379-8dc4541cf018-registry-certificates\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993684 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b26036bc-4cff-472f-a379-8dc4541cf018-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993692 
4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b26036bc-4cff-472f-a379-8dc4541cf018-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993701 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxjjm\" (UniqueName: \"kubernetes.io/projected/b26036bc-4cff-472f-a379-8dc4541cf018-kube-api-access-zxjjm\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:55 crc kubenswrapper[4681]: I0404 02:01:55.993710 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.056444 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: E0404 02:01:56.069837 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Apr 04 02:01:56 crc kubenswrapper[4681]: E0404 02:01:56.070174 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdgzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-xdpcd_openshift-marketplace(cbcf0420-aff0-484c-9c2b-134552760373): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:56 crc kubenswrapper[4681]: E0404 02:01:56.071343 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-xdpcd" podUID="cbcf0420-aff0-484c-9c2b-134552760373" Apr 04 02:01:56 crc 
kubenswrapper[4681]: I0404 02:01:56.157096 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.162274 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-10-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.167705 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.173300 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-nzjzv"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.190191 4681 scope.go:117] "RemoveContainer" containerID="a7e788b7107c4e3259472a3dc00b36075c3e1912a5f966f02120b87a1eac6fa5" Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.251594 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.261041 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-11-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: W0404 02:01:56.307438 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ab9d863_161a_42f6_a6eb_279b97ad4703.slice/crio-b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f WatchSource:0}: Error finding container b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f: Status 404 returned error can't find the container with id b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.347488 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 
02:01:56.356902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-7-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.367706 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587800-xd7wt"] Apr 04 02:01:56 crc kubenswrapper[4681]: W0404 02:01:56.386236 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16fd856c_870d_4c0b_986c_844ca3a36bbc.slice/crio-a1eb0580bd090a9ea797f43da377a3a5b74fc0c5e81a6ade6294d4c09cd0517e WatchSource:0}: Error finding container a1eb0580bd090a9ea797f43da377a3a5b74fc0c5e81a6ade6294d4c09cd0517e: Status 404 returned error can't find the container with id a1eb0580bd090a9ea797f43da377a3a5b74fc0c5e81a6ade6294d4c09cd0517e Apr 04 02:01:56 crc kubenswrapper[4681]: W0404 02:01:56.396073 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9eff11bc_ec44_4492_90f2_c24f4b0438bc.slice/crio-1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924 WatchSource:0}: Error finding container 1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924: Status 404 returned error can't find the container with id 1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924 Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.444579 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q"] Apr 04 02:01:56 crc kubenswrapper[4681]: W0404 02:01:56.465482 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d26f8dc_36cc_47b4_9729_60177c3ca6e1.slice/crio-7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff WatchSource:0}: Error finding container 7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff: Status 404 
returned error can't find the container with id 7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.565682 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.577690 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-11-crc"] Apr 04 02:01:56 crc kubenswrapper[4681]: W0404 02:01:56.586325 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6c2803d0_196e_4be0_8027_76566b1f53b5.slice/crio-88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc WatchSource:0}: Error finding container 88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc: Status 404 returned error can't find the container with id 88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.832241 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" event={"ID":"9eff11bc-ec44-4492-90f2-c24f4b0438bc","Type":"ContainerStarted","Data":"1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.833582 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" event={"ID":"6d26f8dc-36cc-47b4-9729-60177c3ca6e1","Type":"ContainerStarted","Data":"7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.835501 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" event={"ID":"16fd856c-870d-4c0b-986c-844ca3a36bbc","Type":"ContainerStarted","Data":"a1eb0580bd090a9ea797f43da377a3a5b74fc0c5e81a6ade6294d4c09cd0517e"} Apr 04 
02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.836815 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"de32363f-da7c-4423-9648-bab862a94c60","Type":"ContainerStarted","Data":"d579369e2c58eac774be11ecbde5845746b9216b0788e8fdee6166f86bba9f1e"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.837684 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"afeaf156-4185-4c46-b29d-8c865f90cab3","Type":"ContainerStarted","Data":"1c956bf5a737e8896fd15a3bab9f65c8e7bd9d376f15d7c66e1415a22d3e2a12"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.838777 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" event={"ID":"434092c3-92f3-4a1f-833a-872828fdd96e","Type":"ContainerStarted","Data":"d2058dbdc852ebbe96ac3955dae456a3da364aa1e4e8253617a4738f39a710d3"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.839833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"6c2803d0-196e-4be0-8027-76566b1f53b5","Type":"ContainerStarted","Data":"88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.840883 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" event={"ID":"e5d40eed-1459-4527-bc5b-cec63456d141","Type":"ContainerStarted","Data":"4f60cc1e80d8a2e0c5b39325e6fe741b506524f5e3f8ba9d4cbc7521750cf40d"} Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.842113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"4ab9d863-161a-42f6-a6eb-279b97ad4703","Type":"ContainerStarted","Data":"b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f"} Apr 04 02:01:56 crc 
kubenswrapper[4681]: I0404 02:01:56.843065 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" event={"ID":"c37dc134-cc6c-4a22-add8-c694808f8bb0","Type":"ContainerStarted","Data":"fc02541be394d74ef0c6766e6af7305774c54f4ff3cee8649859850830103691"} Apr 04 02:01:56 crc kubenswrapper[4681]: E0404 02:01:56.847228 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xdpcd" podUID="cbcf0420-aff0-484c-9c2b-134552760373" Apr 04 02:01:56 crc kubenswrapper[4681]: E0404 02:01:56.847404 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-w2zrg" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.918600 4681 patch_prober.go:28] interesting pod/console-56c4884fb5-4p4lw container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" start-of-body= Apr 04 02:01:56 crc kubenswrapper[4681]: I0404 02:01:56.918657 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.58:8443/health\": dial tcp 10.217.0.58:8443: connect: connection refused" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.242321 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" 
path="/var/lib/kubelet/pods/b26036bc-4cff-472f-a379-8dc4541cf018/volumes" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.243013 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" path="/var/lib/kubelet/pods/cd3e2f32-1fb8-4458-938c-25b7e4a3fb33/volumes" Apr 04 02:01:57 crc kubenswrapper[4681]: E0404 02:01:57.356021 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Apr 04 02:01:57 crc kubenswrapper[4681]: E0404 02:01:57.356154 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rfcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,Proc
Mount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qdvrs_openshift-marketplace(d83b7914-ed31-46fd-9fc5-b7924c6b8b3d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:01:57 crc kubenswrapper[4681]: E0404 02:01:57.357405 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qdvrs" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.861256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"de32363f-da7c-4423-9648-bab862a94c60","Type":"ContainerStarted","Data":"1418eb2f7c558025d3c093377890cfbf7eacdfee9d3d9f4e22f8f39c4fad5af1"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.868208 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.873315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-hjl4b" event={"ID":"36b289c9-56bc-4b1a-ab7d-1777b34bcaf4","Type":"ContainerStarted","Data":"1e2289e61a91f1a81108fbf5cc98b394fdfffbd0728435a05aee0be8051221e4"} Apr 04 02:01:57 crc kubenswrapper[4681]: 
I0404 02:01:57.874414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" event={"ID":"e5d40eed-1459-4527-bc5b-cec63456d141","Type":"ContainerStarted","Data":"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.874552 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" containerName="route-controller-manager" containerID="cri-o://338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2" gracePeriod=30 Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.874642 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.879092 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9t5n9" event={"ID":"f9a818bf-9611-4945-9350-97ca20b42b26","Type":"ContainerStarted","Data":"aa30f9bd2b6d2e34207f8b398c2202a998384cba935d9204049a2e501a3056e0"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.887832 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-7-crc" podStartSLOduration=125.88781815 podStartE2EDuration="2m5.88781815s" podCreationTimestamp="2026-04-04 01:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:57.88635048 +0000 UTC m=+397.552125590" watchObservedRunningTime="2026-04-04 02:01:57.88781815 +0000 UTC m=+397.553593270" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.891720 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-756b6f6bc6-ntv7l_b3fc9a5b-081d-4321-ac46-42992adcf541/openshift-controller-manager-operator/0.log" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.891825 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-ntv7l" event={"ID":"b3fc9a5b-081d-4321-ac46-42992adcf541","Type":"ContainerStarted","Data":"10100298997ab59166d0e57a33c0222d1847d4150779f9e5769fa57f2c4e1780"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.900811 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-qj9xc" event={"ID":"3c1605aa-6f4d-4754-9e6e-5f2c2d564f73","Type":"ContainerStarted","Data":"84c08f764daefbdd4ef812369af7b203cb8bcb51214fb6a72aaa925f2a947058"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.903185 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"afeaf156-4185-4c46-b29d-8c865f90cab3","Type":"ContainerStarted","Data":"c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.907093 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" event={"ID":"16fd856c-870d-4c0b-986c-844ca3a36bbc","Type":"ContainerStarted","Data":"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.907713 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.912080 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" 
event={"ID":"0cedaefc-2211-4575-8993-8aff39f0d5a3","Type":"ContainerStarted","Data":"6439e40ffa8690b2ee8cce663c3847cbd40694d9f73144835488fd9615a73468"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.912617 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.913781 4681 generic.go:334] "Generic (PLEG): container finished" podID="4ab9d863-161a-42f6-a6eb-279b97ad4703" containerID="73eb53252a0b656882da20a53b64a3148df3fcc941ccf9f239bcf2a3864f745a" exitCode=0 Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.913844 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"4ab9d863-161a-42f6-a6eb-279b97ad4703","Type":"ContainerDied","Data":"73eb53252a0b656882da20a53b64a3148df3fcc941ccf9f239bcf2a3864f745a"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.939324 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" event={"ID":"434092c3-92f3-4a1f-833a-872828fdd96e","Type":"ContainerStarted","Data":"865eaed61abfd23ed92dd38fcab82fb83585db37b279678b0dbea7a0e9b9fc65"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.939388 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.940076 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.977901 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tb658" 
event={"ID":"42fde299-09b3-4bec-83c9-71af1d27475a","Type":"ContainerStarted","Data":"ea433c6e4cfb430927d12af337a2a3138e2fad8b5e84e94d9f711ee8a3d26956"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.986026 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8g27v" event={"ID":"867152c5-9f9e-40b4-8623-3437a9793b5d","Type":"ContainerStarted","Data":"4a721446e1336ac4d6e827a46ca3c1095c8e6ff5ddbd2e7f355ddfe02d937c53"} Apr 04 02:01:57 crc kubenswrapper[4681]: I0404 02:01:57.996092 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" podStartSLOduration=147.996075187 podStartE2EDuration="2m27.996075187s" podCreationTimestamp="2026-04-04 01:59:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:57.995699417 +0000 UTC m=+397.661474527" watchObservedRunningTime="2026-04-04 02:01:57.996075187 +0000 UTC m=+397.661850307" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.001767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-x7dlg" event={"ID":"966e01cf-5149-43ef-8967-517e68e2bbaa","Type":"ContainerStarted","Data":"43ea560a46aaa63a0bfc84248a34708114f159b015d11ccf63f3821b144ad554"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.032381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-ghxgb" event={"ID":"cbede535-d73e-41cf-b483-6f6794647f90","Type":"ContainerStarted","Data":"0bbf8f3014a607f5587a4831d3942fd0056f6562199e491593569bc008752e8d"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.049730 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-p6wvt" event={"ID":"1182c93a-3e68-4418-aeb7-8394689b55c2","Type":"ContainerStarted","Data":"47ea0e13fb2ebd0a49478579ef9691852705d70c6f0711902399b75c629f4e72"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.050800 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" podStartSLOduration=128.050783201 podStartE2EDuration="2m8.050783201s" podCreationTimestamp="2026-04-04 01:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:58.050483513 +0000 UTC m=+397.716258633" watchObservedRunningTime="2026-04-04 02:01:58.050783201 +0000 UTC m=+397.716558321" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.064811 4681 generic.go:334] "Generic (PLEG): container finished" podID="6d26f8dc-36cc-47b4-9729-60177c3ca6e1" containerID="ad003577de2480f7f45393bef52ea2d1a876e1270c88a5d57053223c2df309f7" exitCode=0 Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.064900 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" event={"ID":"6d26f8dc-36cc-47b4-9729-60177c3ca6e1","Type":"ContainerDied","Data":"ad003577de2480f7f45393bef52ea2d1a876e1270c88a5d57053223c2df309f7"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.090360 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-tq7nn_394b01ea-0b57-4565-aa56-96b6c5372a15/console-operator/0.log" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.090481 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" 
event={"ID":"394b01ea-0b57-4565-aa56-96b6c5372a15","Type":"ContainerStarted","Data":"46ba05acc571d834cd0a95941dab41feac3ceb5b85eeda0117a85b8c481f5d74"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.091243 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.101791 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tq7nn" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.102048 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"6c2803d0-196e-4be0-8027-76566b1f53b5","Type":"ContainerStarted","Data":"520497b1cde3a710e01913ef18405eb6963c3c35f6702ba43e057e7b2a36e10c"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.108486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"41fb4dd6c2592e49047e022e5a47354a8dedc99a81bf956a4f345def71a3eca3"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.121784 4681 generic.go:334] "Generic (PLEG): container finished" podID="669511ba-ae82-499c-bf4e-a2893e990205" containerID="e0332e8507a10bc2b20f226e068c0c9fdb1e06b9335d39bb86a9b0bfebd7954a" exitCode=0 Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.121847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"669511ba-ae82-499c-bf4e-a2893e990205","Type":"ContainerDied","Data":"e0332e8507a10bc2b20f226e068c0c9fdb1e06b9335d39bb86a9b0bfebd7954a"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.123339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" 
event={"ID":"c37dc134-cc6c-4a22-add8-c694808f8bb0","Type":"ContainerStarted","Data":"2b98d03e9b122829274e1995a71c8e035caad50416490d03b4fd2fd811c380db"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.124124 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.166421 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-blqhv" event={"ID":"5bb4d019-ae1f-4aa2-b255-a6974c4edf4a","Type":"ContainerStarted","Data":"a198290440574f4b18b6fc7cc88d62c0d83169aee033a1c0308f88cfce4c5398"} Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.178066 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=129.178051519 podStartE2EDuration="2m9.178051519s" podCreationTimestamp="2026-04-04 01:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:58.144159624 +0000 UTC m=+397.809934744" watchObservedRunningTime="2026-04-04 02:01:58.178051519 +0000 UTC m=+397.843826639" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.178946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-dt66q" event={"ID":"559af3cb-f642-4e99-91e1-155840a1629c","Type":"ContainerStarted","Data":"caf3e34760781c6acf9c547ff9a0f4dfd8e2995f4531a938a791f71c1f2b6efb"} Apr 04 02:01:58 crc kubenswrapper[4681]: E0404 02:01:58.180236 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qdvrs" 
podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.201709 4681 patch_prober.go:28] interesting pod/route-controller-manager-7d7b6f8969-zdl55 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": read tcp 10.217.0.2:49782->10.217.0.66:8443: read: connection reset by peer" start-of-body= Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.201854 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": read tcp 10.217.0.2:49782->10.217.0.66:8443: read: connection reset by peer" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.255189 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.273706 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" podStartSLOduration=144.273666003 podStartE2EDuration="2m24.273666003s" podCreationTimestamp="2026-04-04 01:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:58.25274559 +0000 UTC m=+397.918520710" watchObservedRunningTime="2026-04-04 02:01:58.273666003 +0000 UTC m=+397.939441123" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.589665 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-11-crc" podStartSLOduration=119.589650159 podStartE2EDuration="1m59.589650159s" podCreationTimestamp="2026-04-04 01:59:59 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:58.58682581 +0000 UTC m=+398.252600930" watchObservedRunningTime="2026-04-04 02:01:58.589650159 +0000 UTC m=+398.255425279" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.776181 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" podStartSLOduration=154.776164526 podStartE2EDuration="2m34.776164526s" podCreationTimestamp="2026-04-04 01:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:01:58.774597713 +0000 UTC m=+398.440372833" watchObservedRunningTime="2026-04-04 02:01:58.776164526 +0000 UTC m=+398.441939646" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.836853 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7d7b6f8969-zdl55_e5d40eed-1459-4527-bc5b-cec63456d141/route-controller-manager/0.log" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.837123 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.919915 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca\") pod \"e5d40eed-1459-4527-bc5b-cec63456d141\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.920011 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert\") pod \"e5d40eed-1459-4527-bc5b-cec63456d141\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.920031 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config\") pod \"e5d40eed-1459-4527-bc5b-cec63456d141\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.920083 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpddn\" (UniqueName: \"kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn\") pod \"e5d40eed-1459-4527-bc5b-cec63456d141\" (UID: \"e5d40eed-1459-4527-bc5b-cec63456d141\") " Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.922324 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5d40eed-1459-4527-bc5b-cec63456d141" (UID: "e5d40eed-1459-4527-bc5b-cec63456d141"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.922844 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config" (OuterVolumeSpecName: "config") pod "e5d40eed-1459-4527-bc5b-cec63456d141" (UID: "e5d40eed-1459-4527-bc5b-cec63456d141"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.926980 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn" (OuterVolumeSpecName: "kube-api-access-fpddn") pod "e5d40eed-1459-4527-bc5b-cec63456d141" (UID: "e5d40eed-1459-4527-bc5b-cec63456d141"). InnerVolumeSpecName "kube-api-access-fpddn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:58 crc kubenswrapper[4681]: I0404 02:01:58.927207 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5d40eed-1459-4527-bc5b-cec63456d141" (UID: "e5d40eed-1459-4527-bc5b-cec63456d141"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.021573 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpddn\" (UniqueName: \"kubernetes.io/projected/e5d40eed-1459-4527-bc5b-cec63456d141-kube-api-access-fpddn\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.021788 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.021854 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d40eed-1459-4527-bc5b-cec63456d141-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.021913 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5d40eed-1459-4527-bc5b-cec63456d141-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.124444 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2zcp7"] Apr 04 02:01:59 crc kubenswrapper[4681]: E0404 02:01:59.124801 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.124905 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" Apr 04 02:01:59 crc kubenswrapper[4681]: E0404 02:01:59.124992 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" containerName="installer" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.125062 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" containerName="installer" Apr 04 02:01:59 crc kubenswrapper[4681]: E0404 02:01:59.125129 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" containerName="route-controller-manager" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.125187 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" containerName="route-controller-manager" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.125396 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd3e2f32-1fb8-4458-938c-25b7e4a3fb33" containerName="installer" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.125490 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" containerName="route-controller-manager" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.125564 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.126039 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.127927 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.185478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" event={"ID":"9eff11bc-ec44-4492-90f2-c24f4b0438bc","Type":"ContainerStarted","Data":"e7c19c368ce0b475b003861af2dbc2fc182f3be264c38c46aae9c596492d72e1"} Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187239 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7d7b6f8969-zdl55_e5d40eed-1459-4527-bc5b-cec63456d141/route-controller-manager/0.log" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187329 4681 generic.go:334] "Generic (PLEG): container finished" podID="e5d40eed-1459-4527-bc5b-cec63456d141" containerID="338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2" exitCode=255 Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187386 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" event={"ID":"e5d40eed-1459-4527-bc5b-cec63456d141","Type":"ContainerDied","Data":"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2"} Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187473 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55" event={"ID":"e5d40eed-1459-4527-bc5b-cec63456d141","Type":"ContainerDied","Data":"4f60cc1e80d8a2e0c5b39325e6fe741b506524f5e3f8ba9d4cbc7521750cf40d"} Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.187490 4681 scope.go:117] "RemoveContainer" containerID="338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.224223 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.224341 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.224433 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.224542 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcl98\" (UniqueName: \"kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.226317 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" podStartSLOduration=117.55770404 podStartE2EDuration="1m59.226299871s" podCreationTimestamp="2026-04-04 02:00:00 +0000 UTC" firstStartedPulling="2026-04-04 02:01:56.419347137 +0000 UTC m=+396.085122257" lastFinishedPulling="2026-04-04 02:01:58.087942968 +0000 UTC m=+397.753718088" observedRunningTime="2026-04-04 02:01:59.221010033 +0000 UTC m=+398.886785153" watchObservedRunningTime="2026-04-04 02:01:59.226299871 +0000 UTC m=+398.892074991" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.236480 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.243062 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d7b6f8969-zdl55"] Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.257124 4681 scope.go:117] "RemoveContainer" containerID="338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2" Apr 04 02:01:59 crc kubenswrapper[4681]: E0404 02:01:59.257917 4681 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2\": container with ID starting with 338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2 not found: ID does not exist" containerID="338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.257960 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2"} err="failed to get container status \"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2\": rpc error: code = NotFound desc = could not find container \"338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2\": container with ID starting with 338f8b553ce29c8a5d0e3c4b7bad760cd27e6fdb4a221df419e562d5a3aef9d2 not found: ID does not exist" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.326106 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.326316 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.326538 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir\") 
pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.326628 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.327082 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.327643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcl98\" (UniqueName: \"kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.328613 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready\") pod \"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.349849 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcl98\" (UniqueName: \"kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98\") pod 
\"cni-sysctl-allowlist-ds-2zcp7\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.440451 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.522177 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.542507 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.573294 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644626 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume\") pod \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644764 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir\") pod \"669511ba-ae82-499c-bf4e-a2893e990205\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644792 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume\") pod \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\" (UID: 
\"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644829 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqk24\" (UniqueName: \"kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24\") pod \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\" (UID: \"6d26f8dc-36cc-47b4-9729-60177c3ca6e1\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644860 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access\") pod \"669511ba-ae82-499c-bf4e-a2893e990205\" (UID: \"669511ba-ae82-499c-bf4e-a2893e990205\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.644859 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "669511ba-ae82-499c-bf4e-a2893e990205" (UID: "669511ba-ae82-499c-bf4e-a2893e990205"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.645130 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/669511ba-ae82-499c-bf4e-a2893e990205-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.645535 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume" (OuterVolumeSpecName: "config-volume") pod "6d26f8dc-36cc-47b4-9729-60177c3ca6e1" (UID: "6d26f8dc-36cc-47b4-9729-60177c3ca6e1"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.649628 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6d26f8dc-36cc-47b4-9729-60177c3ca6e1" (UID: "6d26f8dc-36cc-47b4-9729-60177c3ca6e1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.649646 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "669511ba-ae82-499c-bf4e-a2893e990205" (UID: "669511ba-ae82-499c-bf4e-a2893e990205"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.649664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24" (OuterVolumeSpecName: "kube-api-access-pqk24") pod "6d26f8dc-36cc-47b4-9729-60177c3ca6e1" (UID: "6d26f8dc-36cc-47b4-9729-60177c3ca6e1"). InnerVolumeSpecName "kube-api-access-pqk24". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.746074 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir\") pod \"4ab9d863-161a-42f6-a6eb-279b97ad4703\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.746209 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access\") pod \"4ab9d863-161a-42f6-a6eb-279b97ad4703\" (UID: \"4ab9d863-161a-42f6-a6eb-279b97ad4703\") " Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.746660 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqk24\" (UniqueName: \"kubernetes.io/projected/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-kube-api-access-pqk24\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.747086 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/669511ba-ae82-499c-bf4e-a2893e990205-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.747165 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.747181 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d26f8dc-36cc-47b4-9729-60177c3ca6e1-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.746748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ab9d863-161a-42f6-a6eb-279b97ad4703" (UID: "4ab9d863-161a-42f6-a6eb-279b97ad4703"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.751576 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ab9d863-161a-42f6-a6eb-279b97ad4703" (UID: "4ab9d863-161a-42f6-a6eb-279b97ad4703"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.848535 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ab9d863-161a-42f6-a6eb-279b97ad4703-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:01:59 crc kubenswrapper[4681]: I0404 02:01:59.848938 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ab9d863-161a-42f6-a6eb-279b97ad4703-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.127664 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587802-dwq82"] Apr 04 02:02:00 crc kubenswrapper[4681]: E0404 02:02:00.127903 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d26f8dc-36cc-47b4-9729-60177c3ca6e1" containerName="collect-profiles" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.127915 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d26f8dc-36cc-47b4-9729-60177c3ca6e1" containerName="collect-profiles" Apr 04 02:02:00 crc kubenswrapper[4681]: E0404 02:02:00.127922 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="669511ba-ae82-499c-bf4e-a2893e990205" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.127927 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="669511ba-ae82-499c-bf4e-a2893e990205" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: E0404 02:02:00.127941 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ab9d863-161a-42f6-a6eb-279b97ad4703" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.127947 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab9d863-161a-42f6-a6eb-279b97ad4703" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128061 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="669511ba-ae82-499c-bf4e-a2893e990205" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128079 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d26f8dc-36cc-47b4-9729-60177c3ca6e1" containerName="collect-profiles" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128088 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ab9d863-161a-42f6-a6eb-279b97ad4703" containerName="pruner" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128461 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128470 4681 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-nzjzv container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.128530 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-nzjzv" podUID="b26036bc-4cff-472f-a379-8dc4541cf018" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.36:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.135291 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587802-dwq82"] Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.152409 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pstt\" (UniqueName: \"kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt\") pod \"auto-csr-approver-29587802-dwq82\" (UID: \"fa5d5179-84c8-46fd-9328-9016b6b13714\") " pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.196587 4681 generic.go:334] "Generic (PLEG): container finished" podID="9eff11bc-ec44-4492-90f2-c24f4b0438bc" containerID="e7c19c368ce0b475b003861af2dbc2fc182f3be264c38c46aae9c596492d72e1" exitCode=0 Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.196636 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" 
event={"ID":"9eff11bc-ec44-4492-90f2-c24f4b0438bc","Type":"ContainerDied","Data":"e7c19c368ce0b475b003861af2dbc2fc182f3be264c38c46aae9c596492d72e1"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.198186 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" event={"ID":"6d26f8dc-36cc-47b4-9729-60177c3ca6e1","Type":"ContainerDied","Data":"7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.198317 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dab0a1490e54b270d96f532741e7b30983251e38646293d2c97a777c336caff" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.198230 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.200788 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" event={"ID":"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54","Type":"ContainerStarted","Data":"eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.200824 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" event={"ID":"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54","Type":"ContainerStarted","Data":"3f2881d7547bcf4b838a62779dbe3eb29515b6086a2d88dd8867edbdb25767b1"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.201027 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.205172 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.205180 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"669511ba-ae82-499c-bf4e-a2893e990205","Type":"ContainerDied","Data":"bee42f0e988a2d17781091179c96881098621f17fbe01dc2f09fcae2e05002b9"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.205289 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bee42f0e988a2d17781091179c96881098621f17fbe01dc2f09fcae2e05002b9" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.207904 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-11-crc" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.207939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-11-crc" event={"ID":"4ab9d863-161a-42f6-a6eb-279b97ad4703","Type":"ContainerDied","Data":"b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f"} Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.207959 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0347a168433d665ad1ec9319d2f08abe4732bf54b90a804bcb986cf48f92a3f" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.216335 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gwptd" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.231768 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.231992 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podStartSLOduration=1.231973706 
podStartE2EDuration="1.231973706s" podCreationTimestamp="2026-04-04 02:01:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:02:00.227025558 +0000 UTC m=+399.892800688" watchObservedRunningTime="2026-04-04 02:02:00.231973706 +0000 UTC m=+399.897748826" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.258363 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pstt\" (UniqueName: \"kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt\") pod \"auto-csr-approver-29587802-dwq82\" (UID: \"fa5d5179-84c8-46fd-9328-9016b6b13714\") " pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.281223 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pstt\" (UniqueName: \"kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt\") pod \"auto-csr-approver-29587802-dwq82\" (UID: \"fa5d5179-84c8-46fd-9328-9016b6b13714\") " pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.505408 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:00 crc kubenswrapper[4681]: I0404 02:02:00.707911 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587802-dwq82"] Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.130951 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2zcp7"] Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.215525 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d40eed-1459-4527-bc5b-cec63456d141" path="/var/lib/kubelet/pods/e5d40eed-1459-4527-bc5b-cec63456d141/volumes" Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.244576 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587802-dwq82" event={"ID":"fa5d5179-84c8-46fd-9328-9016b6b13714","Type":"ContainerStarted","Data":"e5465fdc5b4245881743d17f4436263ba6c699b9651ca70571f6a37c5a1f0426"} Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.505797 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.677896 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shz9h\" (UniqueName: \"kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h\") pod \"9eff11bc-ec44-4492-90f2-c24f4b0438bc\" (UID: \"9eff11bc-ec44-4492-90f2-c24f4b0438bc\") " Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.686005 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h" (OuterVolumeSpecName: "kube-api-access-shz9h") pod "9eff11bc-ec44-4492-90f2-c24f4b0438bc" (UID: "9eff11bc-ec44-4492-90f2-c24f4b0438bc"). InnerVolumeSpecName "kube-api-access-shz9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:01 crc kubenswrapper[4681]: I0404 02:02:01.779714 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shz9h\" (UniqueName: \"kubernetes.io/projected/9eff11bc-ec44-4492-90f2-c24f4b0438bc-kube-api-access-shz9h\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:02 crc kubenswrapper[4681]: I0404 02:02:02.250518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" event={"ID":"9eff11bc-ec44-4492-90f2-c24f4b0438bc","Type":"ContainerDied","Data":"1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924"} Apr 04 02:02:02 crc kubenswrapper[4681]: I0404 02:02:02.250944 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e1ab1b3acebcb3ac30917e6449bf843f484cb021c21e00ad3b7beeceb9be924" Apr 04 02:02:02 crc kubenswrapper[4681]: I0404 02:02:02.250527 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587800-xd7wt" Apr 04 02:02:02 crc kubenswrapper[4681]: I0404 02:02:02.253547 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587802-dwq82" event={"ID":"fa5d5179-84c8-46fd-9328-9016b6b13714","Type":"ContainerStarted","Data":"b0650801cceed6327b0324fea67a6a55d0b3e40b16733b5321987db267f75ec6"} Apr 04 02:02:02 crc kubenswrapper[4681]: I0404 02:02:02.253686 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" gracePeriod=30 Apr 04 02:02:03 crc kubenswrapper[4681]: I0404 02:02:03.265945 4681 generic.go:334] "Generic (PLEG): container finished" podID="fa5d5179-84c8-46fd-9328-9016b6b13714" 
containerID="b0650801cceed6327b0324fea67a6a55d0b3e40b16733b5321987db267f75ec6" exitCode=0 Apr 04 02:02:03 crc kubenswrapper[4681]: I0404 02:02:03.266175 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587802-dwq82" event={"ID":"fa5d5179-84c8-46fd-9328-9016b6b13714","Type":"ContainerDied","Data":"b0650801cceed6327b0324fea67a6a55d0b3e40b16733b5321987db267f75ec6"} Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.703914 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:02:04 crc kubenswrapper[4681]: E0404 02:02:04.704511 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eff11bc-ec44-4492-90f2-c24f4b0438bc" containerName="oc" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.704528 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eff11bc-ec44-4492-90f2-c24f4b0438bc" containerName="oc" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.704678 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eff11bc-ec44-4492-90f2-c24f4b0438bc" containerName="oc" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.705118 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.708050 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.709769 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.709993 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.710186 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.712521 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.713831 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.719125 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.826880 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9f5z\" (UniqueName: \"kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.826953 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.826985 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.827024 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.928502 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9f5z\" (UniqueName: \"kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.928603 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca\") pod 
\"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.928642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.928692 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.930093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.930955 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.936276 4681 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:04 crc kubenswrapper[4681]: I0404 02:02:04.948517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9f5z\" (UniqueName: \"kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z\") pod \"route-controller-manager-6849fdf946-8tp9h\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:05 crc kubenswrapper[4681]: I0404 02:02:05.040152 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:06 crc kubenswrapper[4681]: I0404 02:02:06.925566 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 02:02:06 crc kubenswrapper[4681]: I0404 02:02:06.933704 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.003596 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.024802 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.159960 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pstt\" (UniqueName: \"kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt\") pod \"fa5d5179-84c8-46fd-9328-9016b6b13714\" (UID: \"fa5d5179-84c8-46fd-9328-9016b6b13714\") " Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.166760 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt" (OuterVolumeSpecName: "kube-api-access-9pstt") pod "fa5d5179-84c8-46fd-9328-9016b6b13714" (UID: "fa5d5179-84c8-46fd-9328-9016b6b13714"). InnerVolumeSpecName "kube-api-access-9pstt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.262346 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pstt\" (UniqueName: \"kubernetes.io/projected/fa5d5179-84c8-46fd-9328-9016b6b13714-kube-api-access-9pstt\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.296146 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587802-dwq82" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.296619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587802-dwq82" event={"ID":"fa5d5179-84c8-46fd-9328-9016b6b13714","Type":"ContainerDied","Data":"e5465fdc5b4245881743d17f4436263ba6c699b9651ca70571f6a37c5a1f0426"} Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.296647 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5465fdc5b4245881743d17f4436263ba6c699b9651ca70571f6a37c5a1f0426" Apr 04 02:02:07 crc kubenswrapper[4681]: I0404 02:02:07.535165 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:02:08 crc kubenswrapper[4681]: I0404 02:02:08.305798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" event={"ID":"b9f92dd1-63f1-471c-b923-5cbf185137ca","Type":"ContainerStarted","Data":"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2"} Apr 04 02:02:08 crc kubenswrapper[4681]: I0404 02:02:08.305867 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" event={"ID":"b9f92dd1-63f1-471c-b923-5cbf185137ca","Type":"ContainerStarted","Data":"c660176f73c992524f61e47457a071e1975accf69b2f3c1a725232eca416456e"} Apr 04 02:02:08 crc kubenswrapper[4681]: I0404 02:02:08.308104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerStarted","Data":"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b"} Apr 04 02:02:09 crc kubenswrapper[4681]: I0404 02:02:09.317385 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="f00114dc-2aae-4d37-8143-71336f144be3" containerID="e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b" exitCode=0 Apr 04 02:02:09 crc kubenswrapper[4681]: I0404 02:02:09.317441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerDied","Data":"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b"} Apr 04 02:02:09 crc kubenswrapper[4681]: I0404 02:02:09.317936 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:09 crc kubenswrapper[4681]: I0404 02:02:09.326073 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:02:09 crc kubenswrapper[4681]: I0404 02:02:09.356773 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" podStartSLOduration=138.356757203 podStartE2EDuration="2m18.356757203s" podCreationTimestamp="2026-04-04 01:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:02:09.353927664 +0000 UTC m=+409.019702784" watchObservedRunningTime="2026-04-04 02:02:09.356757203 +0000 UTC m=+409.022532323" Apr 04 02:02:09 crc kubenswrapper[4681]: E0404 02:02:09.444150 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:09 crc kubenswrapper[4681]: E0404 02:02:09.445473 4681 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:09 crc kubenswrapper[4681]: E0404 02:02:09.446861 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:09 crc kubenswrapper[4681]: E0404 02:02:09.446960 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:02:10 crc kubenswrapper[4681]: E0404 02:02:10.263043 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-xdpcd" podUID="cbcf0420-aff0-484c-9c2b-134552760373" Apr 04 02:02:10 crc kubenswrapper[4681]: I0404 02:02:10.336529 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerStarted","Data":"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917"} Apr 04 02:02:11 crc kubenswrapper[4681]: I0404 02:02:11.346342 4681 generic.go:334] "Generic (PLEG): container finished" podID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" 
containerID="d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917" exitCode=0 Apr 04 02:02:11 crc kubenswrapper[4681]: I0404 02:02:11.346467 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerDied","Data":"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917"} Apr 04 02:02:11 crc kubenswrapper[4681]: I0404 02:02:11.351206 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerStarted","Data":"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889"} Apr 04 02:02:11 crc kubenswrapper[4681]: I0404 02:02:11.392856 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5qn4m" podStartSLOduration=13.520605321 podStartE2EDuration="3m1.392835344s" podCreationTimestamp="2026-04-04 01:59:10 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.395389343 +0000 UTC m=+242.061164463" lastFinishedPulling="2026-04-04 02:02:10.267619346 +0000 UTC m=+409.933394486" observedRunningTime="2026-04-04 02:02:11.391283301 +0000 UTC m=+411.057058451" watchObservedRunningTime="2026-04-04 02:02:11.392835344 +0000 UTC m=+411.058610474" Apr 04 02:02:12 crc kubenswrapper[4681]: E0404 02:02:12.715338 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-kmgrn" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" Apr 04 02:02:14 crc kubenswrapper[4681]: I0404 02:02:14.369775 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" 
event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerStarted","Data":"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a"} Apr 04 02:02:14 crc kubenswrapper[4681]: I0404 02:02:14.371507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerStarted","Data":"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe"} Apr 04 02:02:14 crc kubenswrapper[4681]: I0404 02:02:14.376051 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerStarted","Data":"3a4f40eeb33fc880bb4c27013fdcad9382a81b41a7b0635096aa6262dd7b2032"} Apr 04 02:02:14 crc kubenswrapper[4681]: I0404 02:02:14.381754 4681 generic.go:334] "Generic (PLEG): container finished" podID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerID="56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a" exitCode=0 Apr 04 02:02:14 crc kubenswrapper[4681]: I0404 02:02:14.381792 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerDied","Data":"56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a"} Apr 04 02:02:15 crc kubenswrapper[4681]: I0404 02:02:15.308625 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 02:02:15 crc kubenswrapper[4681]: I0404 02:02:15.389979 4681 generic.go:334] "Generic (PLEG): container finished" podID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerID="3a4f40eeb33fc880bb4c27013fdcad9382a81b41a7b0635096aa6262dd7b2032" exitCode=0 Apr 04 02:02:15 crc kubenswrapper[4681]: I0404 02:02:15.390019 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerDied","Data":"3a4f40eeb33fc880bb4c27013fdcad9382a81b41a7b0635096aa6262dd7b2032"} Apr 04 02:02:15 crc kubenswrapper[4681]: I0404 02:02:15.409238 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 02:02:16 crc kubenswrapper[4681]: I0404 02:02:16.403732 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerID="a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe" exitCode=0 Apr 04 02:02:16 crc kubenswrapper[4681]: I0404 02:02:16.404718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerDied","Data":"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe"} Apr 04 02:02:16 crc kubenswrapper[4681]: I0404 02:02:16.431029 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8stk" podStartSLOduration=14.852985948 podStartE2EDuration="3m5.431010867s" podCreationTimestamp="2026-04-04 01:59:11 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.394573189 +0000 UTC m=+242.060348309" lastFinishedPulling="2026-04-04 02:02:12.972598108 +0000 UTC m=+412.638373228" observedRunningTime="2026-04-04 02:02:16.419764283 +0000 UTC m=+416.085539403" watchObservedRunningTime="2026-04-04 02:02:16.431010867 +0000 UTC m=+416.096785987" Apr 04 02:02:17 crc kubenswrapper[4681]: I0404 02:02:17.207333 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:02:19 crc kubenswrapper[4681]: E0404 02:02:19.444169 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:19 crc kubenswrapper[4681]: E0404 02:02:19.445657 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:19 crc kubenswrapper[4681]: E0404 02:02:19.446947 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:19 crc kubenswrapper[4681]: E0404 02:02:19.447015 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:02:21 crc kubenswrapper[4681]: I0404 02:02:21.287415 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:02:21 crc kubenswrapper[4681]: I0404 02:02:21.287698 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:02:21 crc kubenswrapper[4681]: I0404 02:02:21.528391 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:02:21 crc kubenswrapper[4681]: I0404 02:02:21.528790 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:02:22 crc kubenswrapper[4681]: I0404 02:02:22.484029 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:02:22 crc kubenswrapper[4681]: I0404 02:02:22.484720 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:02:22 crc kubenswrapper[4681]: I0404 02:02:22.547953 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:02:23 crc kubenswrapper[4681]: I0404 02:02:23.514249 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.274458 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275381 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275442 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130" gracePeriod=30 Apr 04 02:02:28 crc kubenswrapper[4681]: E0404 02:02:28.275609 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="wait-for-host-port" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275623 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" 
containerName="wait-for-host-port" Apr 04 02:02:28 crc kubenswrapper[4681]: E0404 02:02:28.275632 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-cert-syncer" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275638 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-cert-syncer" Apr 04 02:02:28 crc kubenswrapper[4681]: E0404 02:02:28.275645 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275650 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" Apr 04 02:02:28 crc kubenswrapper[4681]: E0404 02:02:28.275662 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa5d5179-84c8-46fd-9328-9016b6b13714" containerName="oc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275668 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa5d5179-84c8-46fd-9328-9016b6b13714" containerName="oc" Apr 04 02:02:28 crc kubenswrapper[4681]: E0404 02:02:28.275675 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-recovery-controller" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275681 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-recovery-controller" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275783 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa5d5179-84c8-46fd-9328-9016b6b13714" containerName="oc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275574 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-cert-syncer" containerID="cri-o://2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0" gracePeriod=30 Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275794 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-cert-syncer" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275923 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275950 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-recovery-controller" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.275595 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler-recovery-controller" containerID="cri-o://0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc" gracePeriod=30 Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.362058 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.362214 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.463224 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.463310 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.463445 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:28 crc kubenswrapper[4681]: I0404 02:02:28.463491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.443407 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.444547 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.445805 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.445836 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.490542 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.491057 4681 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0" exitCode=2 Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.669832 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.670230 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d" gracePeriod=30 Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.670504 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7" gracePeriod=30 Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.670592 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752" gracePeriod=30 Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.670655 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2" gracePeriod=30 Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.675467 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.675784 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-recovery-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 
02:02:29.675816 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-recovery-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.675846 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.675860 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.675880 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.675892 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.675913 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-cert-syncer" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.675927 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-cert-syncer" Apr 04 02:02:29 crc kubenswrapper[4681]: E0404 02:02:29.675958 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.675972 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.676150 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614b9022728cf315e60c057852e563e" 
containerName="kube-controller-manager-recovery-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.676170 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager-cert-syncer" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.676187 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.676204 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.676223 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.808913 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.808995 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.815973 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.816301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.917650 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.917710 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.917792 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:29 crc kubenswrapper[4681]: I0404 02:02:29.917797 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: 
\"235e9295064844132a05dc40ef3a886a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.380449 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-10-crc"] Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.381862 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.386618 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="f614b9022728cf315e60c057852e563e" podUID="235e9295064844132a05dc40ef3a886a" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.391702 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-10-crc"] Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.499083 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.499783 4681 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc" exitCode=0 Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.499799 4681 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130" exitCode=0 Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.502933 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log" Apr 04 02:02:30 crc 
kubenswrapper[4681]: I0404 02:02:30.503474 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.503824 4681 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2" exitCode=2 Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.505196 4681 generic.go:334] "Generic (PLEG): container finished" podID="de32363f-da7c-4423-9648-bab862a94c60" containerID="1418eb2f7c558025d3c093377890cfbf7eacdfee9d3d9f4e22f8f39c4fad5af1" exitCode=0 Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.505231 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"de32363f-da7c-4423-9648-bab862a94c60","Type":"ContainerDied","Data":"1418eb2f7c558025d3c093377890cfbf7eacdfee9d3d9f4e22f8f39c4fad5af1"} Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.524948 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.525042 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.626296 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.626627 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.626710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.646530 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access\") pod \"revision-pruner-10-crc\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:30 crc kubenswrapper[4681]: I0404 02:02:30.716519 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.056765 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c6ktd" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" containerID="cri-o://be2ecda88d01e3d2d5a2743a7cf1005605b8427a5b9dfb0fefee04ff5de9f9b0" gracePeriod=15 Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.409931 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.521876 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.522428 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.522881 4681 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7" exitCode=0 Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.522908 4681 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752" exitCode=0 Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.522916 4681 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d" exitCode=0 Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.522963 4681 scope.go:117] 
"RemoveContainer" containerID="b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.525421 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c6ktd_1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798/console/0.log" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.525449 4681 generic.go:334] "Generic (PLEG): container finished" podID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerID="be2ecda88d01e3d2d5a2743a7cf1005605b8427a5b9dfb0fefee04ff5de9f9b0" exitCode=2 Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.525484 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c6ktd" event={"ID":"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798","Type":"ContainerDied","Data":"be2ecda88d01e3d2d5a2743a7cf1005605b8427a5b9dfb0fefee04ff5de9f9b0"} Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.526889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-7-crc" event={"ID":"de32363f-da7c-4423-9648-bab862a94c60","Type":"ContainerDied","Data":"d579369e2c58eac774be11ecbde5845746b9216b0788e8fdee6166f86bba9f1e"} Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.526910 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d579369e2c58eac774be11ecbde5845746b9216b0788e8fdee6166f86bba9f1e" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.526960 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-7-crc" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.566866 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir\") pod \"de32363f-da7c-4423-9648-bab862a94c60\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.566938 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock\") pod \"de32363f-da7c-4423-9648-bab862a94c60\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.567049 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access\") pod \"de32363f-da7c-4423-9648-bab862a94c60\" (UID: \"de32363f-da7c-4423-9648-bab862a94c60\") " Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.567457 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "de32363f-da7c-4423-9648-bab862a94c60" (UID: "de32363f-da7c-4423-9648-bab862a94c60"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.567516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock" (OuterVolumeSpecName: "var-lock") pod "de32363f-da7c-4423-9648-bab862a94c60" (UID: "de32363f-da7c-4423-9648-bab862a94c60"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.572050 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "de32363f-da7c-4423-9648-bab862a94c60" (UID: "de32363f-da7c-4423-9648-bab862a94c60"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.668903 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/de32363f-da7c-4423-9648-bab862a94c60-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.668946 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:32 crc kubenswrapper[4681]: I0404 02:02:32.668961 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/de32363f-da7c-4423-9648-bab862a94c60-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.184088 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.184416 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-9-crc" podUID="afeaf156-4185-4c46-b29d-8c865f90cab3" containerName="installer" containerID="cri-o://c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798" gracePeriod=30 Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.537832 4681 generic.go:334] "Generic (PLEG): container finished" podID="6c2803d0-196e-4be0-8027-76566b1f53b5" 
containerID="520497b1cde3a710e01913ef18405eb6963c3c35f6702ba43e057e7b2a36e10c" exitCode=0 Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.537960 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"6c2803d0-196e-4be0-8027-76566b1f53b5","Type":"ContainerDied","Data":"520497b1cde3a710e01913ef18405eb6963c3c35f6702ba43e057e7b2a36e10c"} Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.541999 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2zcp7_9f7c541f-ab5d-462b-bc06-c1eaa50b4e54/kube-multus-additional-cni-plugins/0.log" Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.542075 4681 generic.go:334] "Generic (PLEG): container finished" podID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" exitCode=137 Apr 04 02:02:33 crc kubenswrapper[4681]: I0404 02:02:33.542117 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" event={"ID":"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54","Type":"ContainerDied","Data":"eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071"} Apr 04 02:02:34 crc kubenswrapper[4681]: E0404 02:02:34.988957 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-podafeaf156_4185_4c46_b29d_8c865f90cab3.slice/crio-conmon-c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:02:35 crc kubenswrapper[4681]: I0404 02:02:35.558009 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-9-crc_afeaf156-4185-4c46-b29d-8c865f90cab3/installer/0.log" Apr 04 02:02:35 crc kubenswrapper[4681]: I0404 02:02:35.558536 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="afeaf156-4185-4c46-b29d-8c865f90cab3" containerID="c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798" exitCode=1 Apr 04 02:02:35 crc kubenswrapper[4681]: I0404 02:02:35.558648 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"afeaf156-4185-4c46-b29d-8c865f90cab3","Type":"ContainerDied","Data":"c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798"} Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.601124 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-10-crc"] Apr 04 02:02:36 crc kubenswrapper[4681]: E0404 02:02:36.601526 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de32363f-da7c-4423-9648-bab862a94c60" containerName="installer" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.601545 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="de32363f-da7c-4423-9648-bab862a94c60" containerName="installer" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.601696 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="de32363f-da7c-4423-9648-bab862a94c60" containerName="installer" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.602332 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.608063 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-10-crc"] Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.731518 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.731846 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.732062 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.833557 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.833642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.833747 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.833823 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.833843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.867376 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access\") pod \"installer-10-crc\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.910437 4681 patch_prober.go:28] interesting pod/console-f9d7485db-c6ktd container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/health\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.910514 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-c6ktd" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:02:36 crc kubenswrapper[4681]: I0404 02:02:36.931821 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.439874 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log" Apr 04 02:02:39 crc kubenswrapper[4681]: E0404 02:02:39.441561 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071 is running failed: container process not found" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.441820 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: E0404 02:02:39.441859 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071 is running failed: container process not found" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:39 crc kubenswrapper[4681]: E0404 02:02:39.442137 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071 is running failed: container process not found" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" cmd=["/bin/bash","-c","test -f /ready/ready"] Apr 04 02:02:39 crc kubenswrapper[4681]: E0404 02:02:39.442174 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071 is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.446157 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="3dcd261975c3d6b9a6ad6367fd4facd3" podUID="815516d0756bb9282f4d0a28cef72670" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.447162 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.448082 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.450129 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.451358 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="f614b9022728cf315e60c057852e563e" podUID="235e9295064844132a05dc40ef3a886a" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.473328 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"f614b9022728cf315e60c057852e563e\" (UID: \"f614b9022728cf315e60c057852e563e\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.473494 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f614b9022728cf315e60c057852e563e" (UID: "f614b9022728cf315e60c057852e563e"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.473872 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574287 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock\") pod \"6c2803d0-196e-4be0-8027-76566b1f53b5\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574344 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"3dcd261975c3d6b9a6ad6367fd4facd3\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574365 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir\") pod \"6c2803d0-196e-4be0-8027-76566b1f53b5\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574378 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"f614b9022728cf315e60c057852e563e\" (UID: \"f614b9022728cf315e60c057852e563e\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574401 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"3dcd261975c3d6b9a6ad6367fd4facd3\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.574454 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access\") pod \"6c2803d0-196e-4be0-8027-76566b1f53b5\" (UID: \"6c2803d0-196e-4be0-8027-76566b1f53b5\") " Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.575314 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c2803d0-196e-4be0-8027-76566b1f53b5" (UID: "6c2803d0-196e-4be0-8027-76566b1f53b5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.575393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock" (OuterVolumeSpecName: "var-lock") pod "6c2803d0-196e-4be0-8027-76566b1f53b5" (UID: "6c2803d0-196e-4be0-8027-76566b1f53b5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.575493 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3dcd261975c3d6b9a6ad6367fd4facd3" (UID: "3dcd261975c3d6b9a6ad6367fd4facd3"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.575528 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f614b9022728cf315e60c057852e563e" (UID: "f614b9022728cf315e60c057852e563e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.575549 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3dcd261975c3d6b9a6ad6367fd4facd3" (UID: "3dcd261975c3d6b9a6ad6367fd4facd3"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.581074 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c2803d0-196e-4be0-8027-76566b1f53b5" (UID: "6c2803d0-196e-4be0-8027-76566b1f53b5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.587460 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-11-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.587751 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-11-crc" event={"ID":"6c2803d0-196e-4be0-8027-76566b1f53b5","Type":"ContainerDied","Data":"88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc"} Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.587894 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88decdfafcb60bfc0dd79c5d41da4f343befb6a20d9f6edf579069d0fd64e5cc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.589886 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_3dcd261975c3d6b9a6ad6367fd4facd3/kube-scheduler-cert-syncer/0.log" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.590807 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.593775 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.594172 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="3dcd261975c3d6b9a6ad6367fd4facd3" podUID="815516d0756bb9282f4d0a28cef72670" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.594815 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.599092 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="f614b9022728cf315e60c057852e563e" podUID="235e9295064844132a05dc40ef3a886a" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.616470 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="f614b9022728cf315e60c057852e563e" podUID="235e9295064844132a05dc40ef3a886a" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.618810 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="3dcd261975c3d6b9a6ad6367fd4facd3" podUID="815516d0756bb9282f4d0a28cef72670" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676389 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676433 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676472 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c2803d0-196e-4be0-8027-76566b1f53b5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676484 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676495 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:39 crc kubenswrapper[4681]: I0404 02:02:39.676506 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c2803d0-196e-4be0-8027-76566b1f53b5-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:40 crc kubenswrapper[4681]: I0404 02:02:40.465670 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" podUID="752e5159-8e65-48e4-9c60-8eef98e9b792" containerName="registry" containerID="cri-o://7377aac9196bc2cd47c23b0b9ea0ea2d905772ca6ba9e9272883a0acbe33dcec" gracePeriod=30 Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.202804 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.216162 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" path="/var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/volumes" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.218247 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f614b9022728cf315e60c057852e563e" path="/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/volumes" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.228480 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="21fa3376-6f50-4819-b268-579c7500c923" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.228511 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="21fa3376-6f50-4819-b268-579c7500c923" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.239920 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.240971 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.250789 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.256480 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:02:41 crc kubenswrapper[4681]: I0404 02:02:41.259508 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:02:42 crc kubenswrapper[4681]: I0404 02:02:42.229340 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" podUID="c37dc134-cc6c-4a22-add8-c694808f8bb0" containerName="oauth-openshift" containerID="cri-o://2b98d03e9b122829274e1995a71c8e035caad50416490d03b4fd2fd811c380db" gracePeriod=15 Apr 04 02:02:42 crc kubenswrapper[4681]: I0404 02:02:42.620691 4681 generic.go:334] "Generic (PLEG): container finished" podID="752e5159-8e65-48e4-9c60-8eef98e9b792" containerID="7377aac9196bc2cd47c23b0b9ea0ea2d905772ca6ba9e9272883a0acbe33dcec" exitCode=0 Apr 04 02:02:42 crc kubenswrapper[4681]: I0404 02:02:42.620752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" event={"ID":"752e5159-8e65-48e4-9c60-8eef98e9b792","Type":"ContainerDied","Data":"7377aac9196bc2cd47c23b0b9ea0ea2d905772ca6ba9e9272883a0acbe33dcec"} Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.148769 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c6ktd_1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798/console/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.149305 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.156466 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2zcp7_9f7c541f-ab5d-462b-bc06-c1eaa50b4e54/kube-multus-additional-cni-plugins/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.156565 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.184674 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-9-crc_afeaf156-4185-4c46-b29d-8c865f90cab3/installer/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.184759 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.199912 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238383 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238449 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238486 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir\") pod \"afeaf156-4185-4c46-b29d-8c865f90cab3\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238537 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist\") pod \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238725 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir\") pod \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238784 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238941 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access\") pod \"afeaf156-4185-4c46-b29d-8c865f90cab3\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.238992 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239161 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock\") pod \"afeaf156-4185-4c46-b29d-8c865f90cab3\" (UID: \"afeaf156-4185-4c46-b29d-8c865f90cab3\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239200 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready\") pod \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\" (UID: 
\"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239234 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tks5p\" (UniqueName: \"kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239291 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tcl98\" (UniqueName: \"kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98\") pod \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\" (UID: \"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.239329 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca\") pod \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\" (UID: \"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798\") " Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.241800 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.241845 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="6f8dda2d-d2e0-4669-8a9d-f3ae21eaa22e" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.241860 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config" (OuterVolumeSpecName: "console-config") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.242254 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "afeaf156-4185-4c46-b29d-8c865f90cab3" (UID: "afeaf156-4185-4c46-b29d-8c865f90cab3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.242463 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" (UID: "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.242845 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" (UID: "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.242966 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock" (OuterVolumeSpecName: "var-lock") pod "afeaf156-4185-4c46-b29d-8c865f90cab3" (UID: "afeaf156-4185-4c46-b29d-8c865f90cab3"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.243675 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.243775 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.244032 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready" (OuterVolumeSpecName: "ready") pod "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" (UID: "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.243701 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca" (OuterVolumeSpecName: "service-ca") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.250567 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "afeaf156-4185-4c46-b29d-8c865f90cab3" (UID: "afeaf156-4185-4c46-b29d-8c865f90cab3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.254739 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.258477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.259560 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98" (OuterVolumeSpecName: "kube-api-access-tcl98") pod "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" (UID: "9f7c541f-ab5d-462b-bc06-c1eaa50b4e54"). InnerVolumeSpecName "kube-api-access-tcl98". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.259737 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p" (OuterVolumeSpecName: "kube-api-access-tks5p") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). 
InnerVolumeSpecName "kube-api-access-tks5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.261581 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" (UID: "1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.263701 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.268385 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.276695 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.281159 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.340936 4681 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.340973 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.340986 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.340999 4681 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341011 4681 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341022 4681 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341042 4681 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afeaf156-4185-4c46-b29d-8c865f90cab3-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341066 4681 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341079 4681 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-console-oauth-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341090 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/afeaf156-4185-4c46-b29d-8c865f90cab3-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341101 4681 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-ready\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341112 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tks5p\" (UniqueName: \"kubernetes.io/projected/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-kube-api-access-tks5p\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341125 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tcl98\" (UniqueName: \"kubernetes.io/projected/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54-kube-api-access-tcl98\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.341135 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798-service-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.652799 4681 generic.go:334] "Generic (PLEG): container finished" podID="c37dc134-cc6c-4a22-add8-c694808f8bb0" containerID="2b98d03e9b122829274e1995a71c8e035caad50416490d03b4fd2fd811c380db" exitCode=0 Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.652866 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" event={"ID":"c37dc134-cc6c-4a22-add8-c694808f8bb0","Type":"ContainerDied","Data":"2b98d03e9b122829274e1995a71c8e035caad50416490d03b4fd2fd811c380db"} Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.655183 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c6ktd_1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798/console/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.655513 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c6ktd" event={"ID":"1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798","Type":"ContainerDied","Data":"fb66f684fcb12fc8bf6dfd6dc2f7f92e1acc9a42c1878c52bf543bcb51c94850"} Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.655539 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c6ktd" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.660024 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-9-crc_afeaf156-4185-4c46-b29d-8c865f90cab3/installer/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.660166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"afeaf156-4185-4c46-b29d-8c865f90cab3","Type":"ContainerDied","Data":"1c956bf5a737e8896fd15a3bab9f65c8e7bd9d376f15d7c66e1415a22d3e2a12"} Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.660235 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.662235 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-2zcp7_9f7c541f-ab5d-462b-bc06-c1eaa50b4e54/kube-multus-additional-cni-plugins/0.log" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.662315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" event={"ID":"9f7c541f-ab5d-462b-bc06-c1eaa50b4e54","Type":"ContainerDied","Data":"3f2881d7547bcf4b838a62779dbe3eb29515b6086a2d88dd8867edbdb25767b1"} Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.662383 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-2zcp7" Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.713624 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.721568 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c6ktd"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.737419 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.745235 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.750990 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2zcp7"] Apr 04 02:02:43 crc kubenswrapper[4681]: I0404 02:02:43.754873 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-2zcp7"] Apr 04 02:02:44 crc kubenswrapper[4681]: I0404 02:02:44.941930 4681 scope.go:117] "RemoveContainer" containerID="0a4d1e6ca047390c262fe05912f5de2fb5d4fa0f6d51f4462d9b602901aa99dc" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.210877 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" path="/var/lib/kubelet/pods/1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798/volumes" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.211679 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" path="/var/lib/kubelet/pods/9f7c541f-ab5d-462b-bc06-c1eaa50b4e54/volumes" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.212316 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afeaf156-4185-4c46-b29d-8c865f90cab3" 
path="/var/lib/kubelet/pods/afeaf156-4185-4c46-b29d-8c865f90cab3/volumes" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.218669 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.227534 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.368822 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.368876 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.368916 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.368937 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmcgb\" (UniqueName: \"kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc 
kubenswrapper[4681]: I0404 02:02:45.368958 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.369412 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.369955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370001 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370320 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370366 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370479 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370508 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc 
kubenswrapper[4681]: I0404 02:02:45.370540 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370566 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370624 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370649 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370672 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370700 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.370732 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.371052 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fx8hc\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.371477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.371489 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.371602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.375159 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.371084 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls\") pod \"752e5159-8e65-48e4-9c60-8eef98e9b792\" (UID: \"752e5159-8e65-48e4-9c60-8eef98e9b792\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.376068 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.376153 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login\") pod \"c37dc134-cc6c-4a22-add8-c694808f8bb0\" (UID: \"c37dc134-cc6c-4a22-add8-c694808f8bb0\") " Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.377506 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.378516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). 
InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.380987 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381026 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381045 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381075 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381098 4681 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-audit-policies\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381115 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381132 4681 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-tls\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.381154 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.382859 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.398543 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.398772 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.398911 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.399938 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.400002 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.400054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.400200 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb" (OuterVolumeSpecName: "kube-api-access-tmcgb") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "kube-api-access-tmcgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.400773 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.400828 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc" (OuterVolumeSpecName: "kube-api-access-fx8hc") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "kube-api-access-fx8hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.402434 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.402531 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.412112 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "752e5159-8e65-48e4-9c60-8eef98e9b792" (UID: "752e5159-8e65-48e4-9c60-8eef98e9b792"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.412177 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "c37dc134-cc6c-4a22-add8-c694808f8bb0" (UID: "c37dc134-cc6c-4a22-add8-c694808f8bb0"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482752 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482798 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmcgb\" (UniqueName: \"kubernetes.io/projected/c37dc134-cc6c-4a22-add8-c694808f8bb0-kube-api-access-tmcgb\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482816 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/752e5159-8e65-48e4-9c60-8eef98e9b792-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482830 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482876 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482898 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-bound-sa-token\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482968 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.482989 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.483003 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/752e5159-8e65-48e4-9c60-8eef98e9b792-registry-certificates\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.483016 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.483029 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fx8hc\" (UniqueName: \"kubernetes.io/projected/752e5159-8e65-48e4-9c60-8eef98e9b792-kube-api-access-fx8hc\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.483044 4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/752e5159-8e65-48e4-9c60-8eef98e9b792-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.483062 4681 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c37dc134-cc6c-4a22-add8-c694808f8bb0-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.643575 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/revision-pruner-10-crc"] Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.652557 4681 scope.go:117] "RemoveContainer" containerID="2d9640effcf3619f733c3f5082251165355776fabe5d8c3ade7f06409ec459d0" Apr 04 02:02:45 crc kubenswrapper[4681]: W0404 02:02:45.678241 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod114ff8c6_d9f1_4a37_9054_89ceef4ea3b2.slice/crio-177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4 WatchSource:0}: Error finding container 177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4: Status 404 returned error can't find the container with id 177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4 Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.691608 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.691608 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-59f479d65-sqzp5" event={"ID":"752e5159-8e65-48e4-9c60-8eef98e9b792","Type":"ContainerDied","Data":"fcbe01930201d305b064246bca54e3687a22649c2773802654becbc0d2996dea"} Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.693522 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" event={"ID":"c37dc134-cc6c-4a22-add8-c694808f8bb0","Type":"ContainerDied","Data":"fc02541be394d74ef0c6766e6af7305774c54f4ff3cee8649859850830103691"} Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.693588 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-67445d46b-m2v67" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.696495 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager-cert-syncer/0.log" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.730941 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.735371 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-59f479d65-sqzp5"] Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.738253 4681 scope.go:117] "RemoveContainer" containerID="ce18a007870156d09b31ab38dd939beba4fc47418b8e07d70945eb4838f93130" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.746645 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.752426 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-67445d46b-m2v67"] Apr 04 02:02:45 crc kubenswrapper[4681]: W0404 02:02:45.772597 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod815516d0756bb9282f4d0a28cef72670.slice/crio-10b858b87ba1180e79968530ac4ae316e2c000d89ed109885b82be541d58bedb WatchSource:0}: Error finding container 10b858b87ba1180e79968530ac4ae316e2c000d89ed109885b82be541d58bedb: Status 404 returned error can't find the container with id 10b858b87ba1180e79968530ac4ae316e2c000d89ed109885b82be541d58bedb Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.783560 4681 scope.go:117] "RemoveContainer" containerID="aa9740509eb24a1a879ff6baa5c00114bdf2b626e53dce80afcb611b70c43509" Apr 04 02:02:45 crc 
kubenswrapper[4681]: I0404 02:02:45.834579 4681 scope.go:117] "RemoveContainer" containerID="4f7eb0459a3d135d52a1c4168c3def80940997d4a4286ffa23d2abdde13b7df7" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.878007 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-10-crc"] Apr 04 02:02:45 crc kubenswrapper[4681]: W0404 02:02:45.890433 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd7db560_7870_4920_b2eb_70225807e946.slice/crio-f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c WatchSource:0}: Error finding container f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c: Status 404 returned error can't find the container with id f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.896516 4681 scope.go:117] "RemoveContainer" containerID="7096a59f168cc8b9e7c61688d8ae41239ce4d62f3f0aa859bdb8586c85afe752" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.921733 4681 scope.go:117] "RemoveContainer" containerID="2f6196c5b3a05c942055f25e31a350843406ea37406e1af62c782dc9f8a4f1c2" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.961120 4681 scope.go:117] "RemoveContainer" containerID="b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c" Apr 04 02:02:45 crc kubenswrapper[4681]: E0404 02:02:45.961640 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\": container with ID starting with b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c not found: ID does not exist" containerID="b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.961677 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c"} err="failed to get container status \"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\": rpc error: code = NotFound desc = could not find container \"b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c\": container with ID starting with b5fc946678318218c1dc93f59aea7fc760172225cdb94efee9ea779b0a3ff53c not found: ID does not exist" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.961703 4681 scope.go:117] "RemoveContainer" containerID="9a52ccf1a40f8ddec3fbfd455dfaae3a08af9e38f488b774ae61283eb539214d" Apr 04 02:02:45 crc kubenswrapper[4681]: I0404 02:02:45.976435 4681 scope.go:117] "RemoveContainer" containerID="be2ecda88d01e3d2d5a2743a7cf1005605b8427a5b9dfb0fefee04ff5de9f9b0" Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.000482 4681 scope.go:117] "RemoveContainer" containerID="c7f3d03327aae12c0c65b9e0ad3f6ac265ee97da818a0603f05f69351573c798" Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.024786 4681 scope.go:117] "RemoveContainer" containerID="eb41cff58d4f5ed66d664d4a448846ace5a88bb9c89cacb0279aa7367733b071" Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.044743 4681 scope.go:117] "RemoveContainer" containerID="7377aac9196bc2cd47c23b0b9ea0ea2d905772ca6ba9e9272883a0acbe33dcec" Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.072725 4681 scope.go:117] "RemoveContainer" containerID="2b98d03e9b122829274e1995a71c8e035caad50416490d03b4fd2fd811c380db" Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.708662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-10-crc" event={"ID":"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2","Type":"ContainerStarted","Data":"177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4"} Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.710096 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-10-crc" event={"ID":"cd7db560-7870-4920-b2eb-70225807e946","Type":"ContainerStarted","Data":"f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c"} Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.716370 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"91a559b4704cd2c5ffd95db768d36f5d833cfce9f042d0bb7c956586b618dc60"} Apr 04 02:02:46 crc kubenswrapper[4681]: I0404 02:02:46.717380 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerStarted","Data":"10b858b87ba1180e79968530ac4ae316e2c000d89ed109885b82be541d58bedb"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.208128 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="752e5159-8e65-48e4-9c60-8eef98e9b792" path="/var/lib/kubelet/pods/752e5159-8e65-48e4-9c60-8eef98e9b792/volumes" Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.209184 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c37dc134-cc6c-4a22-add8-c694808f8bb0" path="/var/lib/kubelet/pods/c37dc134-cc6c-4a22-add8-c694808f8bb0/volumes" Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.724497 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-10-crc" event={"ID":"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2","Type":"ContainerStarted","Data":"3a74638c94d97d47be7b9e04af2c9a032c51ab32dec52de5e58dabb8598de3fc"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.725922 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-10-crc" 
event={"ID":"cd7db560-7870-4920-b2eb-70225807e946","Type":"ContainerStarted","Data":"1bf3e7db8ff31c862a43a9859ea12616ac17315fe579a44dca31a34b20a05fbb"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.727926 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerStarted","Data":"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.729217 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.730985 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerStarted","Data":"246e89242f8191d98ded35cd67c72567f0754ffb51ac295a462acf01eaf8b85f"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.733849 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerStarted","Data":"3c03f92d63fba6500eb513055a6f75613cfec06ee7fc01c6c2cf9ff207a74a04"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.735805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerStarted","Data":"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.737568 4681 generic.go:334] "Generic (PLEG): container finished" podID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" 
containerID="fc0b8780a8621ee82752ea154cde66f3b1c8691e3cff3e322bcfed2470ee6a25" exitCode=0 Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.737621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerDied","Data":"fc0b8780a8621ee82752ea154cde66f3b1c8691e3cff3e322bcfed2470ee6a25"} Apr 04 02:02:47 crc kubenswrapper[4681]: I0404 02:02:47.750502 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dvhf7" podStartSLOduration=11.021091767 podStartE2EDuration="3m33.750485246s" podCreationTimestamp="2026-04-04 01:59:14 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.395133936 +0000 UTC m=+242.060909056" lastFinishedPulling="2026-04-04 02:02:45.124527385 +0000 UTC m=+444.790302535" observedRunningTime="2026-04-04 02:02:47.747630876 +0000 UTC m=+447.413405996" watchObservedRunningTime="2026-04-04 02:02:47.750485246 +0000 UTC m=+447.416260366" Apr 04 02:02:48 crc kubenswrapper[4681]: I0404 02:02:48.745635 4681 generic.go:334] "Generic (PLEG): container finished" podID="114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" containerID="3a74638c94d97d47be7b9e04af2c9a032c51ab32dec52de5e58dabb8598de3fc" exitCode=0 Apr 04 02:02:48 crc kubenswrapper[4681]: I0404 02:02:48.745712 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-10-crc" event={"ID":"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2","Type":"ContainerDied","Data":"3a74638c94d97d47be7b9e04af2c9a032c51ab32dec52de5e58dabb8598de3fc"} Apr 04 02:02:48 crc kubenswrapper[4681]: I0404 02:02:48.748399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2"} Apr 04 02:02:48 crc kubenswrapper[4681]: I0404 
02:02:48.797067 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sv8f4" podStartSLOduration=15.622963024 podStartE2EDuration="3m37.79704734s" podCreationTimestamp="2026-04-04 01:59:11 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.394486927 +0000 UTC m=+242.060262057" lastFinishedPulling="2026-04-04 02:02:44.568571213 +0000 UTC m=+444.234346373" observedRunningTime="2026-04-04 02:02:48.796994669 +0000 UTC m=+448.462769799" watchObservedRunningTime="2026-04-04 02:02:48.79704734 +0000 UTC m=+448.462822460" Apr 04 02:02:48 crc kubenswrapper[4681]: I0404 02:02:48.845803 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w2zrg" podStartSLOduration=15.176182111 podStartE2EDuration="3m35.845784879s" podCreationTimestamp="2026-04-04 01:59:13 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.395569298 +0000 UTC m=+242.061344418" lastFinishedPulling="2026-04-04 02:02:43.065172026 +0000 UTC m=+442.730947186" observedRunningTime="2026-04-04 02:02:48.841729805 +0000 UTC m=+448.507504925" watchObservedRunningTime="2026-04-04 02:02:48.845784879 +0000 UTC m=+448.511559999" Apr 04 02:02:49 crc kubenswrapper[4681]: I0404 02:02:49.757064 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0"} Apr 04 02:02:49 crc kubenswrapper[4681]: I0404 02:02:49.760735 4681 generic.go:334] "Generic (PLEG): container finished" podID="815516d0756bb9282f4d0a28cef72670" containerID="246e89242f8191d98ded35cd67c72567f0754ffb51ac295a462acf01eaf8b85f" exitCode=0 Apr 04 02:02:49 crc kubenswrapper[4681]: I0404 02:02:49.760877 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerDied","Data":"246e89242f8191d98ded35cd67c72567f0754ffb51ac295a462acf01eaf8b85f"} Apr 04 02:02:49 crc kubenswrapper[4681]: I0404 02:02:49.786950 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-10-crc" podStartSLOduration=13.786932096 podStartE2EDuration="13.786932096s" podCreationTimestamp="2026-04-04 02:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:02:48.88673371 +0000 UTC m=+448.552508840" watchObservedRunningTime="2026-04-04 02:02:49.786932096 +0000 UTC m=+449.452707216" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.064888 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.184552 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir\") pod \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.184647 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access\") pod \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\" (UID: \"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2\") " Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.185433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" (UID: "114ff8c6-d9f1-4a37-9054-89ceef4ea3b2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.199095 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" (UID: "114ff8c6-d9f1-4a37-9054-89ceef4ea3b2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.286329 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.286357 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/114ff8c6-d9f1-4a37-9054-89ceef4ea3b2-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.767812 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-10-crc" event={"ID":"114ff8c6-d9f1-4a37-9054-89ceef4ea3b2","Type":"ContainerDied","Data":"177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4"} Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.768041 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="177a7e96cb3293e233c07b7f7b285230f59e2108947da79821185229b264bcf4" Apr 04 02:02:50 crc kubenswrapper[4681]: I0404 02:02:50.767885 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-10-crc" Apr 04 02:02:51 crc kubenswrapper[4681]: I0404 02:02:51.886924 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:51 crc kubenswrapper[4681]: I0404 02:02:51.887993 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:51 crc kubenswrapper[4681]: I0404 02:02:51.957454 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:52 crc kubenswrapper[4681]: I0404 02:02:52.860926 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:52 crc kubenswrapper[4681]: I0404 02:02:52.965429 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 02:02:53 crc kubenswrapper[4681]: I0404 02:02:53.474906 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:02:53 crc kubenswrapper[4681]: I0404 02:02:53.474979 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:02:53 crc kubenswrapper[4681]: I0404 02:02:53.535091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:02:53 crc kubenswrapper[4681]: I0404 02:02:53.866169 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:02:54 crc kubenswrapper[4681]: I0404 02:02:54.479001 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:02:54 crc kubenswrapper[4681]: I0404 
02:02:54.479370 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:02:54 crc kubenswrapper[4681]: I0404 02:02:54.538813 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:02:54 crc kubenswrapper[4681]: I0404 02:02:54.811226 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sv8f4" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="registry-server" containerID="cri-o://3c03f92d63fba6500eb513055a6f75613cfec06ee7fc01c6c2cf9ff207a74a04" gracePeriod=2 Apr 04 02:02:54 crc kubenswrapper[4681]: I0404 02:02:54.862561 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:02:56 crc kubenswrapper[4681]: I0404 02:02:56.825905 4681 generic.go:334] "Generic (PLEG): container finished" podID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerID="3c03f92d63fba6500eb513055a6f75613cfec06ee7fc01c6c2cf9ff207a74a04" exitCode=0 Apr 04 02:02:56 crc kubenswrapper[4681]: I0404 02:02:56.825967 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerDied","Data":"3c03f92d63fba6500eb513055a6f75613cfec06ee7fc01c6c2cf9ff207a74a04"} Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.132229 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.228566 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content\") pod \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.228711 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr\") pod \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.228821 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities\") pod \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\" (UID: \"72699dc0-10a9-45c2-9be8-e7a48b8f4edb\") " Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.230095 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities" (OuterVolumeSpecName: "utilities") pod "72699dc0-10a9-45c2-9be8-e7a48b8f4edb" (UID: "72699dc0-10a9-45c2-9be8-e7a48b8f4edb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.237107 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr" (OuterVolumeSpecName: "kube-api-access-s4jdr") pod "72699dc0-10a9-45c2-9be8-e7a48b8f4edb" (UID: "72699dc0-10a9-45c2-9be8-e7a48b8f4edb"). InnerVolumeSpecName "kube-api-access-s4jdr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.296205 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72699dc0-10a9-45c2-9be8-e7a48b8f4edb" (UID: "72699dc0-10a9-45c2-9be8-e7a48b8f4edb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.330824 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.330875 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.330897 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jdr\" (UniqueName: \"kubernetes.io/projected/72699dc0-10a9-45c2-9be8-e7a48b8f4edb-kube-api-access-s4jdr\") on node \"crc\" DevicePath \"\"" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.851487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sv8f4" event={"ID":"72699dc0-10a9-45c2-9be8-e7a48b8f4edb","Type":"ContainerDied","Data":"0687ff25679556832c1335870c08e5631727bf4ce589b61c94bd23bb4110338d"} Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.851555 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sv8f4" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.851560 4681 scope.go:117] "RemoveContainer" containerID="3c03f92d63fba6500eb513055a6f75613cfec06ee7fc01c6c2cf9ff207a74a04" Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.887671 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 02:02:59 crc kubenswrapper[4681]: I0404 02:02:59.890610 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sv8f4"] Apr 04 02:03:01 crc kubenswrapper[4681]: I0404 02:03:01.214000 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" path="/var/lib/kubelet/pods/72699dc0-10a9-45c2-9be8-e7a48b8f4edb/volumes" Apr 04 02:03:02 crc kubenswrapper[4681]: I0404 02:03:02.218255 4681 scope.go:117] "RemoveContainer" containerID="3a4f40eeb33fc880bb4c27013fdcad9382a81b41a7b0635096aa6262dd7b2032" Apr 04 02:03:02 crc kubenswrapper[4681]: I0404 02:03:02.307907 4681 scope.go:117] "RemoveContainer" containerID="39c7c4d448639f0eebb9aaffcd5954959f90a2cd5f977733bc1d331082bf6b1e" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.649391 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7b6477c865-bjlxv"] Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.649939 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.649958 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.649973 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="752e5159-8e65-48e4-9c60-8eef98e9b792" 
containerName="registry" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.649981 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="752e5159-8e65-48e4-9c60-8eef98e9b792" containerName="registry" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.649991 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c2803d0-196e-4be0-8027-76566b1f53b5" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.649998 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c2803d0-196e-4be0-8027-76566b1f53b5" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650013 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="registry-server" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650022 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="registry-server" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650036 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afeaf156-4185-4c46-b29d-8c865f90cab3" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650044 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="afeaf156-4185-4c46-b29d-8c865f90cab3" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650056 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650063 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650078 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="extract-utilities" Apr 04 02:03:03 crc 
kubenswrapper[4681]: I0404 02:03:03.650089 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="extract-utilities" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650099 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="extract-content" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="extract-content" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650117 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" containerName="pruner" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650125 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" containerName="pruner" Apr 04 02:03:03 crc kubenswrapper[4681]: E0404 02:03:03.650136 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c37dc134-cc6c-4a22-add8-c694808f8bb0" containerName="oauth-openshift" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650143 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c37dc134-cc6c-4a22-add8-c694808f8bb0" containerName="oauth-openshift" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650283 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ddf35f8-67ba-4e4c-ad0e-5d20c24b5798" containerName="console" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650300 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="752e5159-8e65-48e4-9c60-8eef98e9b792" containerName="registry" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650308 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="114ff8c6-d9f1-4a37-9054-89ceef4ea3b2" containerName="pruner" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650316 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9f7c541f-ab5d-462b-bc06-c1eaa50b4e54" containerName="kube-multus-additional-cni-plugins" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650326 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="72699dc0-10a9-45c2-9be8-e7a48b8f4edb" containerName="registry-server" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650336 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c37dc134-cc6c-4a22-add8-c694808f8bb0" containerName="oauth-openshift" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650346 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="afeaf156-4185-4c46-b29d-8c865f90cab3" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650359 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c2803d0-196e-4be0-8027-76566b1f53b5" containerName="installer" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.650825 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.654231 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.654512 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.654677 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.654933 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.655161 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.656528 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.656696 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.656702 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.656882 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.657339 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Apr 04 02:03:03 crc 
kubenswrapper[4681]: I0404 02:03:03.657355 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.660303 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.661977 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b6477c865-bjlxv"] Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.664518 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.669649 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.686770 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688100 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-session\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " 
pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688239 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-policies\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688310 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-dir\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688384 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-login\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688432 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-error\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688506 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688582 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mt9v\" (UniqueName: \"kubernetes.io/projected/5e49e3bb-a179-458c-b61b-2466efd3e39a-kube-api-access-8mt9v\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688716 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688758 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688784 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.688850 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.790352 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-trusted-ca-bundle\") pod 
\"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.790829 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.791145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.791402 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.791687 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-session\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.791927 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-policies\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792132 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792341 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-dir\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792526 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-login\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792715 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-error\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: 
\"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.793148 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mt9v\" (UniqueName: \"kubernetes.io/projected/5e49e3bb-a179-458c-b61b-2466efd3e39a-kube-api-access-8mt9v\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.793418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.793617 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792397 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-dir\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.792466 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-audit-policies\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.794066 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.794217 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-service-ca\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.794495 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " 
pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.796281 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-error\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.796283 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.796401 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.796459 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-user-template-login\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.797757 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.798112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-router-certs\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.806943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-session\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.807562 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5e49e3bb-a179-458c-b61b-2466efd3e39a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.813000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mt9v\" (UniqueName: \"kubernetes.io/projected/5e49e3bb-a179-458c-b61b-2466efd3e39a-kube-api-access-8mt9v\") pod \"oauth-openshift-7b6477c865-bjlxv\" (UID: \"5e49e3bb-a179-458c-b61b-2466efd3e39a\") " 
pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.908838 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerStarted","Data":"7ac427fdf1d993d55e2577a49a62814c8b5df57360fab2e677bb6c1c1c983cf6"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.913415 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerDied","Data":"0e4cd051eaf41ce6ad2b27c0c2164fc249a9bc4eb9a7e4c3ebf066905321840d"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.911126 4681 generic.go:334] "Generic (PLEG): container finished" podID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerID="0e4cd051eaf41ce6ad2b27c0c2164fc249a9bc4eb9a7e4c3ebf066905321840d" exitCode=0 Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.929649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.933972 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerStarted","Data":"22263690eb4d10133ad56476528d9c6f8145ae364a06c6223dd2feb0e10112f7"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.934045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerStarted","Data":"32fb2fe503d9f87d5ab9fe349e8bd659ee722bf28615eced1cd647933ba65151"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 
02:03:03.934060 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"815516d0756bb9282f4d0a28cef72670","Type":"ContainerStarted","Data":"012e418fce41527250e1c1400c7bf40551f1ac955e2eb3fd80ae37a130e0a833"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.935068 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.939550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerStarted","Data":"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a"} Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.960353 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qdvrs" podStartSLOduration=11.176270831 podStartE2EDuration="3m50.960322595s" podCreationTimestamp="2026-04-04 01:59:13 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.395820325 +0000 UTC m=+242.061595445" lastFinishedPulling="2026-04-04 02:03:02.179872049 +0000 UTC m=+461.845647209" observedRunningTime="2026-04-04 02:03:03.952076286 +0000 UTC m=+463.617851416" watchObservedRunningTime="2026-04-04 02:03:03.960322595 +0000 UTC m=+463.626097715" Apr 04 02:03:03 crc kubenswrapper[4681]: I0404 02:03:03.976597 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.008021 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=23.007997573 podStartE2EDuration="23.007997573s" podCreationTimestamp="2026-04-04 02:02:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:03:04.004780058 +0000 UTC m=+463.670555178" watchObservedRunningTime="2026-04-04 02:03:04.007997573 +0000 UTC m=+463.673772693" Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.023974 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=21.023959388 podStartE2EDuration="21.023959388s" podCreationTimestamp="2026-04-04 02:02:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:03:04.019629813 +0000 UTC m=+463.685404933" watchObservedRunningTime="2026-04-04 02:03:04.023959388 +0000 UTC m=+463.689734508" Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.415729 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7b6477c865-bjlxv"] Apr 04 02:03:04 crc kubenswrapper[4681]: W0404 02:03:04.422857 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e49e3bb_a179_458c_b61b_2466efd3e39a.slice/crio-fdcd8d0de837cbfbcbdb193abf1040f16a7d3ade137f22e4bd13e3d3a76725f2 WatchSource:0}: Error finding container fdcd8d0de837cbfbcbdb193abf1040f16a7d3ade137f22e4bd13e3d3a76725f2: Status 404 returned error can't find the container with id fdcd8d0de837cbfbcbdb193abf1040f16a7d3ade137f22e4bd13e3d3a76725f2 Apr 04 
02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.947231 4681 generic.go:334] "Generic (PLEG): container finished" podID="cbcf0420-aff0-484c-9c2b-134552760373" containerID="07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a" exitCode=0 Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.947311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerDied","Data":"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a"} Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.949689 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" event={"ID":"5e49e3bb-a179-458c-b61b-2466efd3e39a","Type":"ContainerStarted","Data":"6be99efcfc0bf5946a872955bc91c69fb1dc21376be9ae9943f6d8cc05fe39b4"} Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.949731 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" event={"ID":"5e49e3bb-a179-458c-b61b-2466efd3e39a","Type":"ContainerStarted","Data":"fdcd8d0de837cbfbcbdb193abf1040f16a7d3ade137f22e4bd13e3d3a76725f2"} Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.949913 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.957337 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" Apr 04 02:03:04 crc kubenswrapper[4681]: I0404 02:03:04.958363 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerStarted","Data":"04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83"} Apr 04 02:03:04 crc kubenswrapper[4681]: 
I0404 02:03:04.988902 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7b6477c865-bjlxv" podStartSLOduration=47.988880138 podStartE2EDuration="47.988880138s" podCreationTimestamp="2026-04-04 02:02:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:03:04.986384681 +0000 UTC m=+464.652159821" watchObservedRunningTime="2026-04-04 02:03:04.988880138 +0000 UTC m=+464.654655248" Apr 04 02:03:05 crc kubenswrapper[4681]: I0404 02:03:05.003726 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kmgrn" podStartSLOduration=11.827838896 podStartE2EDuration="3m54.003711712s" podCreationTimestamp="2026-04-04 01:59:11 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.39492838 +0000 UTC m=+242.060703500" lastFinishedPulling="2026-04-04 02:03:04.570801196 +0000 UTC m=+464.236576316" observedRunningTime="2026-04-04 02:03:05.002916061 +0000 UTC m=+464.668691181" watchObservedRunningTime="2026-04-04 02:03:05.003711712 +0000 UTC m=+464.669486822" Apr 04 02:03:06 crc kubenswrapper[4681]: I0404 02:03:06.972728 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerStarted","Data":"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac"} Apr 04 02:03:06 crc kubenswrapper[4681]: I0404 02:03:06.989480 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xdpcd" podStartSLOduration=9.25986644 podStartE2EDuration="3m52.989458179s" podCreationTimestamp="2026-04-04 01:59:14 +0000 UTC" firstStartedPulling="2026-04-04 01:59:22.395255129 +0000 UTC m=+242.061030249" lastFinishedPulling="2026-04-04 02:03:06.124846868 +0000 UTC m=+465.790621988" 
observedRunningTime="2026-04-04 02:03:06.98649474 +0000 UTC m=+466.652269860" watchObservedRunningTime="2026-04-04 02:03:06.989458179 +0000 UTC m=+466.655233299" Apr 04 02:03:11 crc kubenswrapper[4681]: I0404 02:03:11.683091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:11 crc kubenswrapper[4681]: I0404 02:03:11.683720 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:11 crc kubenswrapper[4681]: I0404 02:03:11.755795 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:12 crc kubenswrapper[4681]: I0404 02:03:12.062305 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:12 crc kubenswrapper[4681]: I0404 02:03:12.112836 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.278132 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.278212 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.278228 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.278243 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.284706 4681 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.285301 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.890433 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.890769 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:13 crc kubenswrapper[4681]: I0404 02:03:13.960447 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.022636 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kmgrn" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="registry-server" containerID="cri-o://04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83" gracePeriod=2 Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.031186 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.032734 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.096946 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.997777 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:14 crc kubenswrapper[4681]: I0404 02:03:14.997905 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.064516 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.132869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.527462 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.528192 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.530119 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.530354 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5vhrm" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.534989 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Apr 04 02:03:15 crc kubenswrapper[4681]: E0404 02:03:15.630137 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc99a24fb_60ac_48a9_9158_40827f6e3737.slice/crio-conmon-04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.672097 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.672149 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.672209 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.773599 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.773739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.773823 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.773998 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.774473 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.794030 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access\") pod \"installer-8-crc\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:15 crc kubenswrapper[4681]: I0404 02:03:15.924982 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:16 crc kubenswrapper[4681]: I0404 02:03:16.036191 4681 generic.go:334] "Generic (PLEG): container finished" podID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerID="04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83" exitCode=0 Apr 04 02:03:16 crc kubenswrapper[4681]: I0404 02:03:16.036326 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerDied","Data":"04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83"} Apr 04 02:03:16 crc kubenswrapper[4681]: I0404 02:03:16.146645 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-crc"] Apr 04 02:03:16 crc kubenswrapper[4681]: W0404 02:03:16.151687 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod65d7cc73_a6ac_469b_8805_38b805286045.slice/crio-7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0 WatchSource:0}: Error finding container 7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0: Status 404 returned error can't find the container with id 7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0 Apr 04 02:03:16 crc kubenswrapper[4681]: I0404 02:03:16.393618 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 02:03:16 crc kubenswrapper[4681]: I0404 02:03:16.942017 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.044481 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kmgrn" event={"ID":"c99a24fb-60ac-48a9-9158-40827f6e3737","Type":"ContainerDied","Data":"88a471454d9c64934c6d23c2120096fda91594aaa9ada110a86684d35e3786b8"} Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.044540 4681 scope.go:117] "RemoveContainer" containerID="04ff8b79d6d3dc9cc933490b38fa6b86aab9ac18a602316acf1b0bd46f1f9a83" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.044545 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kmgrn" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.045998 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"65d7cc73-a6ac-469b-8805-38b805286045","Type":"ContainerStarted","Data":"7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0"} Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.046226 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qdvrs" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="registry-server" containerID="cri-o://7ac427fdf1d993d55e2577a49a62814c8b5df57360fab2e677bb6c1c1c983cf6" gracePeriod=2 Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.092191 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content\") pod \"c99a24fb-60ac-48a9-9158-40827f6e3737\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.092724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities\") pod \"c99a24fb-60ac-48a9-9158-40827f6e3737\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.092921 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k9ng\" (UniqueName: \"kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng\") pod \"c99a24fb-60ac-48a9-9158-40827f6e3737\" (UID: \"c99a24fb-60ac-48a9-9158-40827f6e3737\") " Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.093460 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities" (OuterVolumeSpecName: "utilities") pod "c99a24fb-60ac-48a9-9158-40827f6e3737" (UID: "c99a24fb-60ac-48a9-9158-40827f6e3737"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.093536 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.101065 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng" (OuterVolumeSpecName: "kube-api-access-9k9ng") pod "c99a24fb-60ac-48a9-9158-40827f6e3737" (UID: "c99a24fb-60ac-48a9-9158-40827f6e3737"). InnerVolumeSpecName "kube-api-access-9k9ng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.155968 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c99a24fb-60ac-48a9-9158-40827f6e3737" (UID: "c99a24fb-60ac-48a9-9158-40827f6e3737"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.195138 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k9ng\" (UniqueName: \"kubernetes.io/projected/c99a24fb-60ac-48a9-9158-40827f6e3737-kube-api-access-9k9ng\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.195168 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c99a24fb-60ac-48a9-9158-40827f6e3737-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.233937 4681 scope.go:117] "RemoveContainer" containerID="0e4cd051eaf41ce6ad2b27c0c2164fc249a9bc4eb9a7e4c3ebf066905321840d" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.287950 4681 scope.go:117] "RemoveContainer" containerID="6aab3f4dc3df2a20b9b272243d33d40f6acebf13e06ffdf1fd3aeaeb30d006cb" Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.371507 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 02:03:17 crc kubenswrapper[4681]: I0404 02:03:17.375742 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kmgrn"] Apr 04 02:03:18 crc kubenswrapper[4681]: I0404 02:03:18.059420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" 
event={"ID":"65d7cc73-a6ac-469b-8805-38b805286045","Type":"ContainerStarted","Data":"fe215279d7dcb42c9132de16b3a18fabf154ca594cba65a9227859ebe9f3c978"} Apr 04 02:03:18 crc kubenswrapper[4681]: I0404 02:03:18.795945 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 02:03:18 crc kubenswrapper[4681]: I0404 02:03:18.797025 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xdpcd" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="registry-server" containerID="cri-o://ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac" gracePeriod=2 Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.070108 4681 generic.go:334] "Generic (PLEG): container finished" podID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerID="7ac427fdf1d993d55e2577a49a62814c8b5df57360fab2e677bb6c1c1c983cf6" exitCode=0 Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.071226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerDied","Data":"7ac427fdf1d993d55e2577a49a62814c8b5df57360fab2e677bb6c1c1c983cf6"} Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.092306 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-crc" podStartSLOduration=4.092240411 podStartE2EDuration="4.092240411s" podCreationTimestamp="2026-04-04 02:03:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:03:19.090888004 +0000 UTC m=+478.756663154" watchObservedRunningTime="2026-04-04 02:03:19.092240411 +0000 UTC m=+478.758015571" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.211459 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" path="/var/lib/kubelet/pods/c99a24fb-60ac-48a9-9158-40827f6e3737/volumes" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.815200 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.819467 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.934519 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities\") pod \"cbcf0420-aff0-484c-9c2b-134552760373\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.934567 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdgzk\" (UniqueName: \"kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk\") pod \"cbcf0420-aff0-484c-9c2b-134552760373\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.934628 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rfcz\" (UniqueName: \"kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz\") pod \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.934690 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content\") pod \"cbcf0420-aff0-484c-9c2b-134552760373\" (UID: \"cbcf0420-aff0-484c-9c2b-134552760373\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 
02:03:19.934723 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content\") pod \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.934765 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities\") pod \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\" (UID: \"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d\") " Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.935593 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities" (OuterVolumeSpecName: "utilities") pod "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" (UID: "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.936026 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities" (OuterVolumeSpecName: "utilities") pod "cbcf0420-aff0-484c-9c2b-134552760373" (UID: "cbcf0420-aff0-484c-9c2b-134552760373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.939753 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz" (OuterVolumeSpecName: "kube-api-access-5rfcz") pod "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" (UID: "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d"). InnerVolumeSpecName "kube-api-access-5rfcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.943322 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk" (OuterVolumeSpecName: "kube-api-access-rdgzk") pod "cbcf0420-aff0-484c-9c2b-134552760373" (UID: "cbcf0420-aff0-484c-9c2b-134552760373"). InnerVolumeSpecName "kube-api-access-rdgzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:19 crc kubenswrapper[4681]: I0404 02:03:19.959172 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" (UID: "d83b7914-ed31-46fd-9fc5-b7924c6b8b3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.036022 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rfcz\" (UniqueName: \"kubernetes.io/projected/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-kube-api-access-5rfcz\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.036056 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.036069 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.036080 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdgzk\" (UniqueName: 
\"kubernetes.io/projected/cbcf0420-aff0-484c-9c2b-134552760373-kube-api-access-rdgzk\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.036090 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.075149 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbcf0420-aff0-484c-9c2b-134552760373" (UID: "cbcf0420-aff0-484c-9c2b-134552760373"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.077381 4681 generic.go:334] "Generic (PLEG): container finished" podID="cbcf0420-aff0-484c-9c2b-134552760373" containerID="ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac" exitCode=0 Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.077425 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerDied","Data":"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac"} Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.077464 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xdpcd" event={"ID":"cbcf0420-aff0-484c-9c2b-134552760373","Type":"ContainerDied","Data":"df9c8774db905c1de830ddce922656b58d5c91478f57c5b9e8db101af1436380"} Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.077470 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xdpcd" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.077488 4681 scope.go:117] "RemoveContainer" containerID="ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.079763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qdvrs" event={"ID":"d83b7914-ed31-46fd-9fc5-b7924c6b8b3d","Type":"ContainerDied","Data":"2bf6ce6d9c5ea83ae8f94a0c26f5b2ceedac134fd52e2f44e0cbc7f2a49343ec"} Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.079826 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qdvrs" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.097881 4681 scope.go:117] "RemoveContainer" containerID="07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.117746 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.129740 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xdpcd"] Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.137657 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbcf0420-aff0-484c-9c2b-134552760373-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.137740 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.138241 4681 scope.go:117] "RemoveContainer" containerID="b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.144932 4681 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qdvrs"] Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.168467 4681 scope.go:117] "RemoveContainer" containerID="ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac" Apr 04 02:03:20 crc kubenswrapper[4681]: E0404 02:03:20.168948 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac\": container with ID starting with ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac not found: ID does not exist" containerID="ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.168991 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac"} err="failed to get container status \"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac\": rpc error: code = NotFound desc = could not find container \"ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac\": container with ID starting with ee12241471363ae4e43ae7efcb78a5642b06c4b4ec5a9aa66ad624db12659cac not found: ID does not exist" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.169019 4681 scope.go:117] "RemoveContainer" containerID="07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a" Apr 04 02:03:20 crc kubenswrapper[4681]: E0404 02:03:20.169447 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a\": container with ID starting with 07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a not found: ID does not exist" containerID="07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a" Apr 04 02:03:20 crc 
kubenswrapper[4681]: I0404 02:03:20.169476 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a"} err="failed to get container status \"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a\": rpc error: code = NotFound desc = could not find container \"07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a\": container with ID starting with 07e7d1c1ed177a3043f8cae2afd39060645a379450d7a1567df8b38ebf0ebd4a not found: ID does not exist" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.169499 4681 scope.go:117] "RemoveContainer" containerID="b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16" Apr 04 02:03:20 crc kubenswrapper[4681]: E0404 02:03:20.169750 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16\": container with ID starting with b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16 not found: ID does not exist" containerID="b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.169778 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16"} err="failed to get container status \"b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16\": rpc error: code = NotFound desc = could not find container \"b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16\": container with ID starting with b859e8db31e126665f88aa1f626b4d817dc249b7ec77d57ace8fd5d7bee3ba16 not found: ID does not exist" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.169795 4681 scope.go:117] "RemoveContainer" containerID="7ac427fdf1d993d55e2577a49a62814c8b5df57360fab2e677bb6c1c1c983cf6" Apr 04 
02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.184199 4681 scope.go:117] "RemoveContainer" containerID="fc0b8780a8621ee82752ea154cde66f3b1c8691e3cff3e322bcfed2470ee6a25" Apr 04 02:03:20 crc kubenswrapper[4681]: I0404 02:03:20.222922 4681 scope.go:117] "RemoveContainer" containerID="391b0dadeab0a17ffb841b690258afdcb0692f62abd514deb8137fc42f4cb404" Apr 04 02:03:21 crc kubenswrapper[4681]: I0404 02:03:21.217699 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbcf0420-aff0-484c-9c2b-134552760373" path="/var/lib/kubelet/pods/cbcf0420-aff0-484c-9c2b-134552760373/volumes" Apr 04 02:03:21 crc kubenswrapper[4681]: I0404 02:03:21.219825 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" path="/var/lib/kubelet/pods/d83b7914-ed31-46fd-9fc5-b7924c6b8b3d/volumes" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.179420 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-12-crc"] Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180043 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180079 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180098 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180113 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180133 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcf0420-aff0-484c-9c2b-134552760373" 
containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180149 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180167 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180180 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180209 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180222 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180241 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180256 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180319 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180337 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180360 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" 
containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180374 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="extract-utilities" Apr 04 02:03:22 crc kubenswrapper[4681]: E0404 02:03:22.180401 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180414 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="extract-content" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180604 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99a24fb-60ac-48a9-9158-40827f6e3737" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180637 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83b7914-ed31-46fd-9fc5-b7924c6b8b3d" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.180684 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbcf0420-aff0-484c-9c2b-134552760373" containerName="registry-server" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.181481 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.185854 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-12-crc"] Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.186754 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.187010 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.263286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.263573 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.365432 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.365701 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.365825 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.394979 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:22 crc kubenswrapper[4681]: I0404 02:03:22.502200 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:23 crc kubenswrapper[4681]: I0404 02:03:22.919124 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-12-crc"] Apr 04 02:03:23 crc kubenswrapper[4681]: I0404 02:03:23.108057 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-12-crc" event={"ID":"e944745a-174e-4a7c-8c2a-50f43dc95d55","Type":"ContainerStarted","Data":"9aa44d15197d720686792047dc85b7d29ff72b9c913272d6de000988b40306ef"} Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.117334 4681 generic.go:334] "Generic (PLEG): container finished" podID="e944745a-174e-4a7c-8c2a-50f43dc95d55" containerID="1e943d672f2a569b0b1f4d4c12d9f791de97e4336f41b9a3417f905fb9ec6c02" exitCode=0 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.117381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-12-crc" event={"ID":"e944745a-174e-4a7c-8c2a-50f43dc95d55","Type":"ContainerDied","Data":"1e943d672f2a569b0b1f4d4c12d9f791de97e4336f41b9a3417f905fb9ec6c02"} Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.598218 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599111 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599193 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599701 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba" gracePeriod=15 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599750 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725" gracePeriod=15 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599721 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878" gracePeriod=15 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599726 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c" gracePeriod=15 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.599735 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d" gracePeriod=15 Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601475 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601621 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601642 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601652 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601658 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601665 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601671 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601681 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601686 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 
02:03:24.601694 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601700 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601710 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601715 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601722 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601727 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601738 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601745 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601753 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601761 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.601770 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601776 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601867 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601876 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601885 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601892 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601899 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601907 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.601916 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Apr 04 02:03:24 crc 
kubenswrapper[4681]: I0404 02:03:24.601924 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.602090 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.634802 4681 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696216 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696289 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696338 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc 
kubenswrapper[4681]: I0404 02:03:24.696356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696372 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696529 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.696645 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797553 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797587 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797638 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797664 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc 
kubenswrapper[4681]: I0404 02:03:24.797682 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797707 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797665 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797749 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797792 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797830 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797865 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.797926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: I0404 02:03:24.935639 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:24 crc kubenswrapper[4681]: W0404 02:03:24.969474 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe484bf35d3aabad50f6e4a86d258a31.slice/crio-c284797390e300cbc03e66f925500ec08bc143d07fd8bc6e3b561aebe45ba7b7 WatchSource:0}: Error finding container c284797390e300cbc03e66f925500ec08bc143d07fd8bc6e3b561aebe45ba7b7: Status 404 returned error can't find the container with id c284797390e300cbc03e66f925500ec08bc143d07fd8bc6e3b561aebe45ba7b7 Apr 04 02:03:24 crc kubenswrapper[4681]: E0404 02:03:24.973811 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18a30506d27500a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:be484bf35d3aabad50f6e4a86d258a31,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 02:03:24.972933285 +0000 UTC m=+484.638708435,LastTimestamp:2026-04-04 
02:03:24.972933285 +0000 UTC m=+484.638708435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.133244 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.135202 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.136531 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c" exitCode=0 Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.136561 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878" exitCode=0 Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.136572 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d" exitCode=0 Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.136581 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725" exitCode=2 Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.136647 4681 scope.go:117] "RemoveContainer" containerID="b80b875cd7e44a19eb81b31fcda9d74337d7cecb1c89e225b9820ee38006a8da" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.139612 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="cd7db560-7870-4920-b2eb-70225807e946" containerID="1bf3e7db8ff31c862a43a9859ea12616ac17315fe579a44dca31a34b20a05fbb" exitCode=0 Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.139701 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-10-crc" event={"ID":"cd7db560-7870-4920-b2eb-70225807e946","Type":"ContainerDied","Data":"1bf3e7db8ff31c862a43a9859ea12616ac17315fe579a44dca31a34b20a05fbb"} Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.140598 4681 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.140868 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.141659 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"be484bf35d3aabad50f6e4a86d258a31","Type":"ContainerStarted","Data":"c284797390e300cbc03e66f925500ec08bc143d07fd8bc6e3b561aebe45ba7b7"} Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.360315 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.361153 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.361544 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.405217 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access\") pod \"e944745a-174e-4a7c-8c2a-50f43dc95d55\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.405403 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir\") pod \"e944745a-174e-4a7c-8c2a-50f43dc95d55\" (UID: \"e944745a-174e-4a7c-8c2a-50f43dc95d55\") " Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.405586 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e944745a-174e-4a7c-8c2a-50f43dc95d55" (UID: "e944745a-174e-4a7c-8c2a-50f43dc95d55"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.405855 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e944745a-174e-4a7c-8c2a-50f43dc95d55-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.410533 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e944745a-174e-4a7c-8c2a-50f43dc95d55" (UID: "e944745a-174e-4a7c-8c2a-50f43dc95d55"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:25 crc kubenswrapper[4681]: I0404 02:03:25.507150 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e944745a-174e-4a7c-8c2a-50f43dc95d55-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.151733 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.154292 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"be484bf35d3aabad50f6e4a86d258a31","Type":"ContainerStarted","Data":"e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683"} Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.155484 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: 
connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.155567 4681 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.155782 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.157670 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-12-crc" event={"ID":"e944745a-174e-4a7c-8c2a-50f43dc95d55","Type":"ContainerDied","Data":"9aa44d15197d720686792047dc85b7d29ff72b9c913272d6de000988b40306ef"} Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.157701 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aa44d15197d720686792047dc85b7d29ff72b9c913272d6de000988b40306ef" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.157752 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-12-crc" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.168101 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.168929 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.169469 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.170042 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.170702 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.170755 4681 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.171316 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="200ms" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.172975 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.173706 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.372793 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="400ms" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.464170 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.464797 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.465134 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.522183 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock\") pod \"cd7db560-7870-4920-b2eb-70225807e946\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.522294 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock" (OuterVolumeSpecName: "var-lock") pod "cd7db560-7870-4920-b2eb-70225807e946" (UID: "cd7db560-7870-4920-b2eb-70225807e946"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.522328 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access\") pod \"cd7db560-7870-4920-b2eb-70225807e946\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.522369 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir\") pod \"cd7db560-7870-4920-b2eb-70225807e946\" (UID: \"cd7db560-7870-4920-b2eb-70225807e946\") " Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.522616 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.523348 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd7db560-7870-4920-b2eb-70225807e946" (UID: "cd7db560-7870-4920-b2eb-70225807e946"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.527648 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd7db560-7870-4920-b2eb-70225807e946" (UID: "cd7db560-7870-4920-b2eb-70225807e946"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.549461 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18a30506d27500a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:be484bf35d3aabad50f6e4a86d258a31,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 02:03:24.972933285 +0000 UTC m=+484.638708435,LastTimestamp:2026-04-04 02:03:24.972933285 +0000 UTC m=+484.638708435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.623384 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd7db560-7870-4920-b2eb-70225807e946-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.623422 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd7db560-7870-4920-b2eb-70225807e946-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:26 crc kubenswrapper[4681]: E0404 02:03:26.773796 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="800ms" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.964104 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.966104 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.966806 4681 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.967541 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:26 crc kubenswrapper[4681]: I0404 02:03:26.968160 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029583 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029653 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029703 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029741 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029768 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.029899 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.030212 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.030254 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.030304 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.170024 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.171240 4681 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba" exitCode=0 Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.171454 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.171487 4681 scope.go:117] "RemoveContainer" containerID="33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.174820 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-10-crc" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.174829 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-10-crc" event={"ID":"cd7db560-7870-4920-b2eb-70225807e946","Type":"ContainerDied","Data":"f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c"} Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.174886 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1fd1669a48e00cd1bdb5a29e7cbd8fa3659f6f3b7da79983dd69176475c0b2c" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.175938 4681 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.193653 4681 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.194127 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.194656 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.197337 4681 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.197358 4681 scope.go:117] "RemoveContainer" containerID="0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.197662 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.197923 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.208291 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.212497 4681 scope.go:117] "RemoveContainer" containerID="56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 
02:03:27.229482 4681 scope.go:117] "RemoveContainer" containerID="478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.242843 4681 scope.go:117] "RemoveContainer" containerID="6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.258664 4681 scope.go:117] "RemoveContainer" containerID="abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.280688 4681 scope.go:117] "RemoveContainer" containerID="33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.282222 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\": container with ID starting with 33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c not found: ID does not exist" containerID="33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.282289 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c"} err="failed to get container status \"33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\": rpc error: code = NotFound desc = could not find container \"33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c\": container with ID starting with 33afd9253abf90d4aacf618709c4f55436b445e2e584586c28e4e8ee8adda87c not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.282426 4681 scope.go:117] "RemoveContainer" containerID="0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.283308 4681 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\": container with ID starting with 0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878 not found: ID does not exist" containerID="0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.283344 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878"} err="failed to get container status \"0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\": rpc error: code = NotFound desc = could not find container \"0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878\": container with ID starting with 0163c4b604687a943a293b577795907579d3d1ce83d1e6390743df2df848b878 not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.283372 4681 scope.go:117] "RemoveContainer" containerID="56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.283884 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\": container with ID starting with 56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d not found: ID does not exist" containerID="56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.283910 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d"} err="failed to get container status \"56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\": rpc error: code = NotFound desc = could 
not find container \"56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d\": container with ID starting with 56fc9f435e761ae9cdb7e73f36ed44dbbee2cef879fd3b13bea7fc29787b4f8d not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.283925 4681 scope.go:117] "RemoveContainer" containerID="478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.284587 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\": container with ID starting with 478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725 not found: ID does not exist" containerID="478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.284622 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725"} err="failed to get container status \"478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\": rpc error: code = NotFound desc = could not find container \"478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725\": container with ID starting with 478963b7953e5a7fc36e129acd4a83a490540c030042b8613e2586c0ddfb0725 not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.284647 4681 scope.go:117] "RemoveContainer" containerID="6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.285055 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\": container with ID starting with 6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba not found: 
ID does not exist" containerID="6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.285086 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba"} err="failed to get container status \"6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\": rpc error: code = NotFound desc = could not find container \"6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba\": container with ID starting with 6ec5a675a86906dc8b1813a3289a889520762f2a79349f5f5df2449a1b52f3ba not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.285101 4681 scope.go:117] "RemoveContainer" containerID="abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.285517 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\": container with ID starting with abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b not found: ID does not exist" containerID="abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b" Apr 04 02:03:27 crc kubenswrapper[4681]: I0404 02:03:27.285554 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b"} err="failed to get container status \"abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\": rpc error: code = NotFound desc = could not find container \"abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b\": container with ID starting with abc470a9ecb5ff4e2ff5e95d884910b2f1b2d4bf5dca50da889f13141cd2651b not found: ID does not exist" Apr 04 02:03:27 crc kubenswrapper[4681]: E0404 02:03:27.575143 4681 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="1.6s" Apr 04 02:03:29 crc kubenswrapper[4681]: E0404 02:03:29.176790 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="3.2s" Apr 04 02:03:31 crc kubenswrapper[4681]: I0404 02:03:31.207531 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:31 crc kubenswrapper[4681]: I0404 02:03:31.208624 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:32 crc kubenswrapper[4681]: E0404 02:03:32.378219 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="6.4s" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.200816 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.201845 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.204429 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.222880 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.222945 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:36 crc kubenswrapper[4681]: E0404 02:03:36.223366 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:36 crc kubenswrapper[4681]: I0404 02:03:36.223857 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:36 crc kubenswrapper[4681]: E0404 02:03:36.550548 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18a30506d27500a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:be484bf35d3aabad50f6e4a86d258a31,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 02:03:24.972933285 +0000 UTC m=+484.638708435,LastTimestamp:2026-04-04 02:03:24.972933285 +0000 UTC m=+484.638708435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.240488 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f" exitCode=0 Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.240555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerDied","Data":"93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f"} Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.240605 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"6cf665aa13decbb8bc169514c589301543d600ce7ab2314491dc5e67dc907343"} Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.241028 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.241052 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:37 crc kubenswrapper[4681]: E0404 02:03:37.241592 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.241613 4681 status_manager.go:851] "Failed to get status for pod" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" pod="openshift-kube-controller-manager/revision-pruner-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/revision-pruner-12-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:37 crc kubenswrapper[4681]: I0404 02:03:37.242237 4681 status_manager.go:851] "Failed to get status for pod" podUID="cd7db560-7870-4920-b2eb-70225807e946" pod="openshift-kube-apiserver/installer-10-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-10-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:03:38 crc kubenswrapper[4681]: I0404 02:03:38.250761 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373"} Apr 04 02:03:38 crc kubenswrapper[4681]: I0404 02:03:38.251404 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24"} Apr 04 02:03:38 crc kubenswrapper[4681]: I0404 02:03:38.251462 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c"} Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.257532 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager/0.log" Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.257573 4681 generic.go:334] "Generic (PLEG): container finished" podID="235e9295064844132a05dc40ef3a886a" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" exitCode=1 Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.257621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerDied","Data":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.258063 4681 scope.go:117] "RemoveContainer" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.261652 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332"} Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.261680 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"4e6039c7a12c5a0c0ef5917dc7ee5582","Type":"ContainerStarted","Data":"813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450"} Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.261852 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.261871 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:39 crc kubenswrapper[4681]: I0404 02:03:39.262027 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:40 crc kubenswrapper[4681]: I0404 02:03:40.270050 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager/0.log" Apr 04 02:03:40 crc kubenswrapper[4681]: I0404 02:03:40.270115 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"235e9295064844132a05dc40ef3a886a","Type":"ContainerStarted","Data":"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66"} Apr 04 02:03:41 crc kubenswrapper[4681]: I0404 02:03:41.224139 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:41 crc kubenswrapper[4681]: I0404 02:03:41.224603 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:41 crc kubenswrapper[4681]: I0404 02:03:41.234368 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:43 crc kubenswrapper[4681]: I0404 02:03:43.277324 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:43 crc kubenswrapper[4681]: I0404 02:03:43.278516 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:43 crc kubenswrapper[4681]: I0404 02:03:43.286869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:44 crc kubenswrapper[4681]: I0404 02:03:44.271815 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:44 crc kubenswrapper[4681]: I0404 02:03:44.308526 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:44 crc kubenswrapper[4681]: I0404 02:03:44.308558 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:44 crc kubenswrapper[4681]: I0404 02:03:44.314610 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:03:44 crc kubenswrapper[4681]: I0404 02:03:44.364034 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="4e6039c7a12c5a0c0ef5917dc7ee5582" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:03:45 crc kubenswrapper[4681]: I0404 02:03:45.323119 4681 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:45 crc kubenswrapper[4681]: I0404 02:03:45.323641 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:03:45 crc kubenswrapper[4681]: I0404 02:03:45.328507 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="4e6039c7a12c5a0c0ef5917dc7ee5582" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:03:48 crc kubenswrapper[4681]: I0404 02:03:48.346197 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-crc_65d7cc73-a6ac-469b-8805-38b805286045/installer/0.log" Apr 04 02:03:48 crc kubenswrapper[4681]: I0404 02:03:48.346580 4681 generic.go:334] "Generic (PLEG): container finished" podID="65d7cc73-a6ac-469b-8805-38b805286045" containerID="fe215279d7dcb42c9132de16b3a18fabf154ca594cba65a9227859ebe9f3c978" exitCode=1 Apr 04 02:03:48 crc kubenswrapper[4681]: I0404 02:03:48.346621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"65d7cc73-a6ac-469b-8805-38b805286045","Type":"ContainerDied","Data":"fe215279d7dcb42c9132de16b3a18fabf154ca594cba65a9227859ebe9f3c978"} Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.616579 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-crc_65d7cc73-a6ac-469b-8805-38b805286045/installer/0.log" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.616953 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734075 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir\") pod \"65d7cc73-a6ac-469b-8805-38b805286045\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock\") pod \"65d7cc73-a6ac-469b-8805-38b805286045\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734170 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "65d7cc73-a6ac-469b-8805-38b805286045" (UID: "65d7cc73-a6ac-469b-8805-38b805286045"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734202 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access\") pod \"65d7cc73-a6ac-469b-8805-38b805286045\" (UID: \"65d7cc73-a6ac-469b-8805-38b805286045\") " Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734228 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock" (OuterVolumeSpecName: "var-lock") pod "65d7cc73-a6ac-469b-8805-38b805286045" (UID: "65d7cc73-a6ac-469b-8805-38b805286045"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734424 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.734436 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/65d7cc73-a6ac-469b-8805-38b805286045-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.742231 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "65d7cc73-a6ac-469b-8805-38b805286045" (UID: "65d7cc73-a6ac-469b-8805-38b805286045"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:03:49 crc kubenswrapper[4681]: I0404 02:03:49.835507 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65d7cc73-a6ac-469b-8805-38b805286045-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:03:50 crc kubenswrapper[4681]: I0404 02:03:50.363822 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-crc_65d7cc73-a6ac-469b-8805-38b805286045/installer/0.log" Apr 04 02:03:50 crc kubenswrapper[4681]: I0404 02:03:50.363889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-crc" event={"ID":"65d7cc73-a6ac-469b-8805-38b805286045","Type":"ContainerDied","Data":"7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0"} Apr 04 02:03:50 crc kubenswrapper[4681]: I0404 02:03:50.363921 4681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7bf139da63805c01d0722b9bd93fcde1a01ab802b97d9eedbf2d6fd2a0820ee0" Apr 04 02:03:50 crc kubenswrapper[4681]: I0404 02:03:50.364004 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-crc" Apr 04 02:03:51 crc kubenswrapper[4681]: I0404 02:03:51.265246 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:03:53 crc kubenswrapper[4681]: I0404 02:03:53.283391 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:03:54 crc kubenswrapper[4681]: I0404 02:03:54.015053 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Apr 04 02:03:54 crc kubenswrapper[4681]: I0404 02:03:54.903559 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Apr 04 02:03:55 crc kubenswrapper[4681]: I0404 02:03:55.522381 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Apr 04 02:03:55 crc kubenswrapper[4681]: I0404 02:03:55.573386 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Apr 04 02:03:55 crc kubenswrapper[4681]: I0404 02:03:55.966021 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.196701 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.356365 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.652411 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.824529 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.866740 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Apr 04 02:03:56 crc kubenswrapper[4681]: I0404 02:03:56.884427 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.161037 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.222480 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.255119 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.275780 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.618630 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Apr 04 02:03:57 crc kubenswrapper[4681]: I0404 02:03:57.728881 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Apr 04 02:03:58 crc kubenswrapper[4681]: 
I0404 02:03:58.232198 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.240689 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.250431 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.427645 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.552003 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.592459 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.620489 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.655086 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.725782 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.734426 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.782773 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.785540 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.799733 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.824140 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Apr 04 02:03:58 crc kubenswrapper[4681]: I0404 02:03:58.888324 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.006829 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.013353 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.018022 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.056670 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.073636 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.094206 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.159579 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.203036 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.210423 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.365521 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.450650 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.495847 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.522468 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.572295 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.578033 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.729075 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.770202 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Apr 04 
02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.942353 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.990633 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Apr 04 02:03:59 crc kubenswrapper[4681]: I0404 02:03:59.992998 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.034699 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.078541 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.133678 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.196915 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.208751 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.275202 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.283506 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.296334 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.332379 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.367079 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.381019 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.385750 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.505091 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.523314 4681 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.558151 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.574835 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.646535 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.718249 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.833382 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.855207 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.899791 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Apr 04 02:04:00 crc kubenswrapper[4681]: I0404 02:04:00.958655 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.180084 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.186178 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.221709 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.228379 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.240314 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.261118 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Apr 04 02:04:01 crc 
kubenswrapper[4681]: I0404 02:04:01.269338 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.315223 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.357126 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.370843 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.389506 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.656913 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.689134 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.742303 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.783740 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.789429 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.812742 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Apr 04 02:04:01 crc kubenswrapper[4681]: I0404 02:04:01.816615 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.057795 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.060576 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.224371 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.359659 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.425533 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.437630 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.458680 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.533174 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.673305 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.794252 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.811347 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.855116 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.965236 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Apr 04 02:04:02 crc kubenswrapper[4681]: I0404 02:04:02.987131 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.036770 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.045060 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.100897 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.143837 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.221690 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Apr 04 02:04:03 crc 
kubenswrapper[4681]: I0404 02:04:03.228963 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.288868 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.310640 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.358882 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.415858 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.472742 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.482379 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.532897 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.674645 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.752341 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Apr 04 02:04:03 crc kubenswrapper[4681]: I0404 02:04:03.860018 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Apr 04 
02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.116416 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.221254 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.248191 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.256502 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.271937 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.275917 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.321858 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.454462 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.638705 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.683492 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.700631 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Apr 04 02:04:04 
crc kubenswrapper[4681]: I0404 02:04:04.701046 4681 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.722400 4681 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.835442 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Apr 04 02:04:04 crc kubenswrapper[4681]: I0404 02:04:04.911897 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.022609 4681 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.068018 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.075964 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.078545 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.082474 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.193908 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.254666 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.258492 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.274902 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.287971 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.295085 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.374795 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.387655 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.399257 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.639955 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.659697 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.744330 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.847570 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.894288 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.940516 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Apr 04 02:04:05 crc kubenswrapper[4681]: I0404 02:04:05.981102 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.001696 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.045574 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.059870 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.073597 4681 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082135 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082217 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-infra/auto-csr-approver-29587804-rq24v"] Apr 04 02:04:06 crc kubenswrapper[4681]: E0404 02:04:06.082552 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" containerName="pruner" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082572 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" containerName="pruner" Apr 04 02:04:06 crc kubenswrapper[4681]: E0404 02:04:06.082593 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7db560-7870-4920-b2eb-70225807e946" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082607 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7db560-7870-4920-b2eb-70225807e946" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: E0404 02:04:06.082629 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65d7cc73-a6ac-469b-8805-38b805286045" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082643 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="65d7cc73-a6ac-469b-8805-38b805286045" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082799 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082841 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e6f5d976-9467-446a-83cc-8b487a024874" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082850 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="65d7cc73-a6ac-469b-8805-38b805286045" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082875 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e944745a-174e-4a7c-8c2a-50f43dc95d55" containerName="pruner" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.082907 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7db560-7870-4920-b2eb-70225807e946" containerName="installer" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.083545 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.085217 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.086229 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.088926 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.091375 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.113749 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.113693064 podStartE2EDuration="22.113693064s" podCreationTimestamp="2026-04-04 02:03:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:04:06.104197603 +0000 UTC m=+525.769972763" watchObservedRunningTime="2026-04-04 02:04:06.113693064 +0000 UTC m=+525.779468224" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.128808 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.153798 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.212246 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.263795 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.264991 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2jnh\" (UniqueName: \"kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh\") pod \"auto-csr-approver-29587804-rq24v\" (UID: \"38b3eda4-d536-4b71-990d-56a7d574b4dc\") " pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.361593 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.365968 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2jnh\" (UniqueName: \"kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh\") pod \"auto-csr-approver-29587804-rq24v\" (UID: \"38b3eda4-d536-4b71-990d-56a7d574b4dc\") " pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.376803 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.408469 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2jnh\" (UniqueName: \"kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh\") pod \"auto-csr-approver-29587804-rq24v\" (UID: \"38b3eda4-d536-4b71-990d-56a7d574b4dc\") " pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.414908 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.446253 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.471915 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.669971 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587804-rq24v"] Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.672833 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.685091 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.731016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.743602 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.790376 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.790801 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="be484bf35d3aabad50f6e4a86d258a31" containerName="startup-monitor" containerID="cri-o://e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683" gracePeriod=5 Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.792400 4681 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.802406 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.817228 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.828948 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.897791 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.912782 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.921081 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.922750 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.958880 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.970474 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Apr 04 02:04:06 crc kubenswrapper[4681]: I0404 02:04:06.975536 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" 
Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.018742 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.026372 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.029366 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.087433 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.099875 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.104084 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.251642 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.252511 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.340324 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.352044 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.415106 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.419876 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.482492 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587804-rq24v" event={"ID":"38b3eda4-d536-4b71-990d-56a7d574b4dc","Type":"ContainerStarted","Data":"e3314821f7c3737f9a82a65e1e49fa3aef99c3a32d53156b7cb7135f378b39bb"} Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.571427 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.574439 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.577857 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.601738 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.726194 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.830583 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 02:04:07 crc kubenswrapper[4681]: I0404 02:04:07.914736 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Apr 04 02:04:08 crc kubenswrapper[4681]: 
I0404 02:04:08.024860 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.102293 4681 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.110416 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.248755 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.333333 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.421802 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.489441 4681 generic.go:334] "Generic (PLEG): container finished" podID="38b3eda4-d536-4b71-990d-56a7d574b4dc" containerID="265177d99a0a8387ef416d770ef00f96d777916cd945fdcf9a2cb6dc0c3b21bb" exitCode=0 Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.489495 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587804-rq24v" event={"ID":"38b3eda4-d536-4b71-990d-56a7d574b4dc","Type":"ContainerDied","Data":"265177d99a0a8387ef416d770ef00f96d777916cd945fdcf9a2cb6dc0c3b21bb"} Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.565692 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.572707 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.797120 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.804491 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.849152 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Apr 04 02:04:08 crc kubenswrapper[4681]: I0404 02:04:08.900756 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.167229 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.172554 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.211692 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.292194 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.353486 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.357984 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Apr 04 02:04:09 crc 
kubenswrapper[4681]: I0404 02:04:09.379067 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.477112 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.589499 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.618759 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.735277 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.737046 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.738497 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.751429 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.785606 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.804538 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.908657 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2jnh\" (UniqueName: \"kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh\") pod \"38b3eda4-d536-4b71-990d-56a7d574b4dc\" (UID: \"38b3eda4-d536-4b71-990d-56a7d574b4dc\") " Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.917964 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh" (OuterVolumeSpecName: "kube-api-access-p2jnh") pod "38b3eda4-d536-4b71-990d-56a7d574b4dc" (UID: "38b3eda4-d536-4b71-990d-56a7d574b4dc"). InnerVolumeSpecName "kube-api-access-p2jnh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.942834 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.944291 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.958600 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Apr 04 02:04:09 crc kubenswrapper[4681]: I0404 02:04:09.987505 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.009822 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2jnh\" (UniqueName: \"kubernetes.io/projected/38b3eda4-d536-4b71-990d-56a7d574b4dc-kube-api-access-p2jnh\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.158862 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.212922 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.217532 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.449746 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.502451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587804-rq24v" 
event={"ID":"38b3eda4-d536-4b71-990d-56a7d574b4dc","Type":"ContainerDied","Data":"e3314821f7c3737f9a82a65e1e49fa3aef99c3a32d53156b7cb7135f378b39bb"} Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.502514 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3314821f7c3737f9a82a65e1e49fa3aef99c3a32d53156b7cb7135f378b39bb" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.502480 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587804-rq24v" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.504434 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.697911 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.708328 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.798368 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Apr 04 02:04:10 crc kubenswrapper[4681]: I0404 02:04:10.887493 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.023673 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.125749 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.352900 4681 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.492721 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.517919 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Apr 04 02:04:11 crc kubenswrapper[4681]: I0404 02:04:11.948123 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.489004 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_be484bf35d3aabad50f6e4a86d258a31/startup-monitor/0.log" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.489288 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.515834 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.518874 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_be484bf35d3aabad50f6e4a86d258a31/startup-monitor/0.log" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.518920 4681 generic.go:334] "Generic (PLEG): container finished" podID="be484bf35d3aabad50f6e4a86d258a31" containerID="e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683" exitCode=137 Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.518959 4681 scope.go:117] "RemoveContainer" containerID="e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683" Apr 04 02:04:12 crc 
kubenswrapper[4681]: I0404 02:04:12.518998 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.533296 4681 scope.go:117] "RemoveContainer" containerID="e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683" Apr 04 02:04:12 crc kubenswrapper[4681]: E0404 02:04:12.533803 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683\": container with ID starting with e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683 not found: ID does not exist" containerID="e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.533840 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683"} err="failed to get container status \"e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683\": rpc error: code = NotFound desc = could not find container \"e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683\": container with ID starting with e7d0aed04e72c999c9297c7e0cc832d4cd2cfbaf640159c58210bf322eaa0683 not found: ID does not exist" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641593 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir\") pod \"be484bf35d3aabad50f6e4a86d258a31\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641642 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests\") pod \"be484bf35d3aabad50f6e4a86d258a31\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641710 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests" (OuterVolumeSpecName: "manifests") pod "be484bf35d3aabad50f6e4a86d258a31" (UID: "be484bf35d3aabad50f6e4a86d258a31"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641832 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir\") pod \"be484bf35d3aabad50f6e4a86d258a31\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641709 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "be484bf35d3aabad50f6e4a86d258a31" (UID: "be484bf35d3aabad50f6e4a86d258a31"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641958 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock\") pod \"be484bf35d3aabad50f6e4a86d258a31\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.641992 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log\") pod \"be484bf35d3aabad50f6e4a86d258a31\" (UID: \"be484bf35d3aabad50f6e4a86d258a31\") " Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642055 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock" (OuterVolumeSpecName: "var-lock") pod "be484bf35d3aabad50f6e4a86d258a31" (UID: "be484bf35d3aabad50f6e4a86d258a31"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log" (OuterVolumeSpecName: "var-log") pod "be484bf35d3aabad50f6e4a86d258a31" (UID: "be484bf35d3aabad50f6e4a86d258a31"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642342 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642364 4681 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-var-log\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642376 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.642389 4681 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-manifests\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.651144 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "be484bf35d3aabad50f6e4a86d258a31" (UID: "be484bf35d3aabad50f6e4a86d258a31"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.693126 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.743097 4681 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/be484bf35d3aabad50f6e4a86d258a31-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:12 crc kubenswrapper[4681]: I0404 02:04:12.780057 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Apr 04 02:04:13 crc kubenswrapper[4681]: I0404 02:04:13.207826 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be484bf35d3aabad50f6e4a86d258a31" path="/var/lib/kubelet/pods/be484bf35d3aabad50f6e4a86d258a31/volumes" Apr 04 02:04:14 crc kubenswrapper[4681]: I0404 02:04:14.690584 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587798-tmssr"] Apr 04 02:04:14 crc kubenswrapper[4681]: I0404 02:04:14.696457 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587798-tmssr"] Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.209740 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcd50c8c-38c5-4c42-930d-2235c4384328" path="/var/lib/kubelet/pods/fcd50c8c-38c5-4c42-930d-2235c4384328/volumes" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.417120 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Apr 04 02:04:15 crc kubenswrapper[4681]: E0404 02:04:15.417878 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be484bf35d3aabad50f6e4a86d258a31" containerName="startup-monitor" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.417899 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be484bf35d3aabad50f6e4a86d258a31" containerName="startup-monitor" Apr 04 02:04:15 crc kubenswrapper[4681]: E0404 02:04:15.417924 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b3eda4-d536-4b71-990d-56a7d574b4dc" containerName="oc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.417937 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b3eda4-d536-4b71-990d-56a7d574b4dc" containerName="oc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.418116 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="be484bf35d3aabad50f6e4a86d258a31" containerName="startup-monitor" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.418142 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b3eda4-d536-4b71-990d-56a7d574b4dc" containerName="oc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.418720 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.422255 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.422315 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.427203 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.577501 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc 
kubenswrapper[4681]: I0404 02:04:15.577741 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.679616 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.679690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.679794 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.700977 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.737716 4681 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:15 crc kubenswrapper[4681]: I0404 02:04:15.917767 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Apr 04 02:04:16 crc kubenswrapper[4681]: I0404 02:04:16.545895 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2","Type":"ContainerStarted","Data":"06cfd8ecbbd397ec272da4fbc95a48bd7f74b2a85c2c6aa4c5787c4e009ee8db"} Apr 04 02:04:16 crc kubenswrapper[4681]: I0404 02:04:16.545947 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2","Type":"ContainerStarted","Data":"d4f4f8ee6bdf5f096de85d258bcc024fe0d2774dc68ddea2f0b81580ee79dbeb"} Apr 04 02:04:16 crc kubenswrapper[4681]: I0404 02:04:16.563113 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=1.563090078 podStartE2EDuration="1.563090078s" podCreationTimestamp="2026-04-04 02:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:04:16.558922222 +0000 UTC m=+536.224697342" watchObservedRunningTime="2026-04-04 02:04:16.563090078 +0000 UTC m=+536.228865198" Apr 04 02:04:17 crc kubenswrapper[4681]: I0404 02:04:17.553399 4681 generic.go:334] "Generic (PLEG): container finished" podID="a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" containerID="06cfd8ecbbd397ec272da4fbc95a48bd7f74b2a85c2c6aa4c5787c4e009ee8db" exitCode=0 Apr 04 02:04:17 crc kubenswrapper[4681]: I0404 02:04:17.553515 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" 
event={"ID":"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2","Type":"ContainerDied","Data":"06cfd8ecbbd397ec272da4fbc95a48bd7f74b2a85c2c6aa4c5787c4e009ee8db"} Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.776497 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.922648 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access\") pod \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.922792 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir\") pod \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\" (UID: \"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2\") " Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.922867 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" (UID: "a76f74e0-8100-45f2-9d5c-9abe30ba7dc2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.923079 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:18 crc kubenswrapper[4681]: I0404 02:04:18.927374 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" (UID: "a76f74e0-8100-45f2-9d5c-9abe30ba7dc2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:04:19 crc kubenswrapper[4681]: I0404 02:04:19.024823 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a76f74e0-8100-45f2-9d5c-9abe30ba7dc2-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:04:19 crc kubenswrapper[4681]: I0404 02:04:19.565406 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a76f74e0-8100-45f2-9d5c-9abe30ba7dc2","Type":"ContainerDied","Data":"d4f4f8ee6bdf5f096de85d258bcc024fe0d2774dc68ddea2f0b81580ee79dbeb"} Apr 04 02:04:19 crc kubenswrapper[4681]: I0404 02:04:19.565448 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4f4f8ee6bdf5f096de85d258bcc024fe0d2774dc68ddea2f0b81580ee79dbeb" Apr 04 02:04:19 crc kubenswrapper[4681]: I0404 02:04:19.565464 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Apr 04 02:04:26 crc kubenswrapper[4681]: I0404 02:04:26.523589 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:04:26 crc kubenswrapper[4681]: I0404 02:04:26.523986 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:04:37 crc kubenswrapper[4681]: I0404 02:04:37.687229 4681 generic.go:334] "Generic (PLEG): container finished" podID="664aa862-1bb6-421a-87b9-992ead56694b" containerID="2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341" exitCode=0 Apr 04 02:04:37 crc kubenswrapper[4681]: I0404 02:04:37.687365 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerDied","Data":"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341"} Apr 04 02:04:37 crc kubenswrapper[4681]: I0404 02:04:37.688954 4681 scope.go:117] "RemoveContainer" containerID="2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341" Apr 04 02:04:38 crc kubenswrapper[4681]: I0404 02:04:38.697411 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerStarted","Data":"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a"} Apr 04 02:04:38 crc kubenswrapper[4681]: I0404 02:04:38.697850 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 02:04:38 crc kubenswrapper[4681]: I0404 02:04:38.700671 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 02:04:56 crc kubenswrapper[4681]: I0404 02:04:56.524721 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:04:56 crc kubenswrapper[4681]: I0404 02:04:56.525577 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.437471 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-12-crc"] Apr 04 02:05:20 crc kubenswrapper[4681]: E0404 02:05:20.439547 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" containerName="pruner" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.439690 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" containerName="pruner" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.439964 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a76f74e0-8100-45f2-9d5c-9abe30ba7dc2" containerName="pruner" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.440650 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.443533 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.445136 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.450012 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-12-crc"] Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.639001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.639099 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.639160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.740175 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.740239 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.740306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.740799 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.740857 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:20 crc kubenswrapper[4681]: I0404 02:05:20.772566 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access\") pod \"installer-12-crc\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:21 crc kubenswrapper[4681]: I0404 02:05:21.060495 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:21 crc kubenswrapper[4681]: I0404 02:05:21.482844 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-12-crc"] Apr 04 02:05:21 crc kubenswrapper[4681]: W0404 02:05:21.490169 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5c4f9f6f_bb0f_409a_af4c_b023eca2ab69.slice/crio-8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc WatchSource:0}: Error finding container 8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc: Status 404 returned error can't find the container with id 8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc Apr 04 02:05:22 crc kubenswrapper[4681]: I0404 02:05:22.007872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-12-crc" event={"ID":"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69","Type":"ContainerStarted","Data":"cb46c191eefaa813003dbe700b661affa6091b0f541a43020f65198717cf1f96"} Apr 04 02:05:22 crc kubenswrapper[4681]: I0404 02:05:22.007939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-12-crc" event={"ID":"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69","Type":"ContainerStarted","Data":"8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc"} Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.405247 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-12-crc" podStartSLOduration=4.405223861 
podStartE2EDuration="4.405223861s" podCreationTimestamp="2026-04-04 02:05:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:05:22.023795615 +0000 UTC m=+601.689570745" watchObservedRunningTime="2026-04-04 02:05:24.405223861 +0000 UTC m=+604.070998991" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.408561 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.409517 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.413707 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.414662 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.426087 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.596171 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.596303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " 
pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.596351 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.698017 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.698142 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.698231 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.698393 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.698524 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.729158 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access\") pod \"installer-11-crc\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.774306 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:05:24 crc kubenswrapper[4681]: I0404 02:05:24.981868 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-11-crc"] Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.030794 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"27afc22e-f982-4252-a9d3-a07bdd837b1c","Type":"ContainerStarted","Data":"1765f988587e872351addbe5baf66afe930998e79150e62f276cfa16e546fbb1"} Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.625872 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-7b45f65999-2bcr6"] Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.626694 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.678989 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7b45f65999-2bcr6"] Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.812819 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-certificates\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.812969 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-tls\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813113 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b2daa0c-f135-4086-a211-a0fc4cb5116f-ca-trust-extracted\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " 
pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813162 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-bound-sa-token\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-trusted-ca\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813253 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b2daa0c-f135-4086-a211-a0fc4cb5116f-installation-pull-secrets\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.813362 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmx7g\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-kube-api-access-fmx7g\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.837277 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915135 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-certificates\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915236 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-tls\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915385 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b2daa0c-f135-4086-a211-a0fc4cb5116f-ca-trust-extracted\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915460 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-bound-sa-token\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915525 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-trusted-ca\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915601 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b2daa0c-f135-4086-a211-a0fc4cb5116f-installation-pull-secrets\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.915680 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmx7g\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-kube-api-access-fmx7g\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.916468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-certificates\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.916997 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4b2daa0c-f135-4086-a211-a0fc4cb5116f-ca-trust-extracted\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " 
pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.917464 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4b2daa0c-f135-4086-a211-a0fc4cb5116f-trusted-ca\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.924191 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-registry-tls\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.926065 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4b2daa0c-f135-4086-a211-a0fc4cb5116f-installation-pull-secrets\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.939950 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-bound-sa-token\") pod \"image-registry-7b45f65999-2bcr6\" (UID: \"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.940765 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmx7g\" (UniqueName: \"kubernetes.io/projected/4b2daa0c-f135-4086-a211-a0fc4cb5116f-kube-api-access-fmx7g\") pod \"image-registry-7b45f65999-2bcr6\" (UID: 
\"4b2daa0c-f135-4086-a211-a0fc4cb5116f\") " pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:25 crc kubenswrapper[4681]: I0404 02:05:25.942486 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.040987 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"27afc22e-f982-4252-a9d3-a07bdd837b1c","Type":"ContainerStarted","Data":"c62be49f931c85d8079c2af819cf0ba23ea6e77d258f8e6c59e57db695ec7a18"} Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.068837 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-11-crc" podStartSLOduration=2.068817688 podStartE2EDuration="2.068817688s" podCreationTimestamp="2026-04-04 02:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:05:26.064635121 +0000 UTC m=+605.730410241" watchObservedRunningTime="2026-04-04 02:05:26.068817688 +0000 UTC m=+605.734592808" Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.168233 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-7b45f65999-2bcr6"] Apr 04 02:05:26 crc kubenswrapper[4681]: W0404 02:05:26.171575 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b2daa0c_f135_4086_a211_a0fc4cb5116f.slice/crio-5e0ebe6f58c104ba9c4c4b7f1ca5bf62289d2c945b03b127b4ba22583f5a26c0 WatchSource:0}: Error finding container 5e0ebe6f58c104ba9c4c4b7f1ca5bf62289d2c945b03b127b4ba22583f5a26c0: Status 404 returned error can't find the container with id 5e0ebe6f58c104ba9c4c4b7f1ca5bf62289d2c945b03b127b4ba22583f5a26c0 Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.524578 4681 
patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.525066 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.525144 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.526137 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:05:26 crc kubenswrapper[4681]: I0404 02:05:26.526246 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43" gracePeriod=600 Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.055940 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43" exitCode=0 Apr 04 02:05:27 crc 
kubenswrapper[4681]: I0404 02:05:27.056056 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43"} Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.056401 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152"} Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.056429 4681 scope.go:117] "RemoveContainer" containerID="e3d77f999933ace5e138cad5a0f8abd255269546205adace19a75289ff0eb7f5" Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.060131 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" event={"ID":"4b2daa0c-f135-4086-a211-a0fc4cb5116f","Type":"ContainerStarted","Data":"58e751f24c18aedb0742d5dc89929f55e06eb3f852623684a77928cdd3e96701"} Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.060165 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.060181 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" event={"ID":"4b2daa0c-f135-4086-a211-a0fc4cb5116f","Type":"ContainerStarted","Data":"5e0ebe6f58c104ba9c4c4b7f1ca5bf62289d2c945b03b127b4ba22583f5a26c0"} Apr 04 02:05:27 crc kubenswrapper[4681]: I0404 02:05:27.108926 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" podStartSLOduration=2.1089068 podStartE2EDuration="2.1089068s" podCreationTimestamp="2026-04-04 02:05:25 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:05:27.107603287 +0000 UTC m=+606.773378447" watchObservedRunningTime="2026-04-04 02:05:27.1089068 +0000 UTC m=+606.774681920" Apr 04 02:05:45 crc kubenswrapper[4681]: I0404 02:05:45.952545 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-7b45f65999-2bcr6" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.023142 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-7bdb65549f-f4hr5"] Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.540569 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-8-crc"] Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.541358 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.543546 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.544041 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5vhrm" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.552636 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-8-crc"] Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.605707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc 
kubenswrapper[4681]: I0404 02:05:46.605959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.707117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.707169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.707240 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.728986 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:46 crc kubenswrapper[4681]: I0404 02:05:46.863100 4681 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:47 crc kubenswrapper[4681]: I0404 02:05:47.080628 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-8-crc"] Apr 04 02:05:47 crc kubenswrapper[4681]: I0404 02:05:47.206323 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-8-crc" event={"ID":"dde70690-f245-4455-ba5e-1233391c502d","Type":"ContainerStarted","Data":"0cba19490c6f19d80ef2beed8c8d24f1aad0c12d52cbcc6d920ce3138aa1b150"} Apr 04 02:05:48 crc kubenswrapper[4681]: I0404 02:05:48.217339 4681 generic.go:334] "Generic (PLEG): container finished" podID="dde70690-f245-4455-ba5e-1233391c502d" containerID="a129fe7c853d2786ce0228dda45529a58a0f1177dbf7df4218edc42b8f7de438" exitCode=0 Apr 04 02:05:48 crc kubenswrapper[4681]: I0404 02:05:48.217596 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-8-crc" event={"ID":"dde70690-f245-4455-ba5e-1233391c502d","Type":"ContainerDied","Data":"a129fe7c853d2786ce0228dda45529a58a0f1177dbf7df4218edc42b8f7de438"} Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.409861 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.548850 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dde70690-f245-4455-ba5e-1233391c502d" (UID: "dde70690-f245-4455-ba5e-1233391c502d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.548717 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir\") pod \"dde70690-f245-4455-ba5e-1233391c502d\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.549009 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access\") pod \"dde70690-f245-4455-ba5e-1233391c502d\" (UID: \"dde70690-f245-4455-ba5e-1233391c502d\") " Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.550942 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dde70690-f245-4455-ba5e-1233391c502d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.558789 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dde70690-f245-4455-ba5e-1233391c502d" (UID: "dde70690-f245-4455-ba5e-1233391c502d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:05:49 crc kubenswrapper[4681]: I0404 02:05:49.654478 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dde70690-f245-4455-ba5e-1233391c502d-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:50 crc kubenswrapper[4681]: I0404 02:05:50.232714 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-8-crc" event={"ID":"dde70690-f245-4455-ba5e-1233391c502d","Type":"ContainerDied","Data":"0cba19490c6f19d80ef2beed8c8d24f1aad0c12d52cbcc6d920ce3138aa1b150"} Apr 04 02:05:50 crc kubenswrapper[4681]: I0404 02:05:50.232780 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cba19490c6f19d80ef2beed8c8d24f1aad0c12d52cbcc6d920ce3138aa1b150" Apr 04 02:05:50 crc kubenswrapper[4681]: I0404 02:05:50.232831 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-8-crc" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.794053 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.795573 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" gracePeriod=30 Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.795671 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-cert-syncer" 
containerID="cri-o://f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" gracePeriod=30 Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.795749 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" containerID="cri-o://2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" gracePeriod=30 Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.796631 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.797062 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde70690-f245-4455-ba5e-1233391c502d" containerName="pruner" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.797160 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde70690-f245-4455-ba5e-1233391c502d" containerName="pruner" Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.797309 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-recovery-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.797407 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-recovery-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.797519 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.797600 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.797681 4681 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-cert-syncer" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799126 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-cert-syncer" Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.799297 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235e9295064844132a05dc40ef3a886a" containerName="cluster-policy-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799416 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e9295064844132a05dc40ef3a886a" containerName="cluster-policy-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799689 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e9295064844132a05dc40ef3a886a" containerName="cluster-policy-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799777 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-cert-syncer" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799875 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde70690-f245-4455-ba5e-1233391c502d" containerName="pruner" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.799960 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager-recovery-controller" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.800036 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.800129 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="235e9295064844132a05dc40ef3a886a" 
containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: E0404 02:05:54.800427 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.800527 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="235e9295064844132a05dc40ef3a886a" containerName="kube-controller-manager" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.795507 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="235e9295064844132a05dc40ef3a886a" containerName="cluster-policy-controller" containerID="cri-o://0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" gracePeriod=30 Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.921350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:54 crc kubenswrapper[4681]: I0404 02:05:54.921481 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.022931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.023205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.023341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.023081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/c32a96981201f35bdc64ba062620676a-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"c32a96981201f35bdc64ba062620676a\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.036833 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager-cert-syncer/0.log" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.038164 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager/0.log" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.038386 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.044254 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="235e9295064844132a05dc40ef3a886a" podUID="c32a96981201f35bdc64ba062620676a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.124160 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir\") pod \"235e9295064844132a05dc40ef3a886a\" (UID: \"235e9295064844132a05dc40ef3a886a\") " Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.124424 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir\") pod \"235e9295064844132a05dc40ef3a886a\" (UID: \"235e9295064844132a05dc40ef3a886a\") " Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.124410 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "235e9295064844132a05dc40ef3a886a" (UID: "235e9295064844132a05dc40ef3a886a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.124474 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "235e9295064844132a05dc40ef3a886a" (UID: "235e9295064844132a05dc40ef3a886a"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.125118 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.125187 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/235e9295064844132a05dc40ef3a886a-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.212140 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="235e9295064844132a05dc40ef3a886a" path="/var/lib/kubelet/pods/235e9295064844132a05dc40ef3a886a/volumes" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.273089 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager-cert-syncer/0.log" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276588 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_235e9295064844132a05dc40ef3a886a/kube-controller-manager/0.log" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276685 4681 generic.go:334] "Generic (PLEG): container finished" podID="235e9295064844132a05dc40ef3a886a" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" exitCode=0 Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276729 4681 generic.go:334] "Generic (PLEG): container finished" podID="235e9295064844132a05dc40ef3a886a" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" exitCode=0 Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276748 4681 generic.go:334] "Generic (PLEG): container finished" podID="235e9295064844132a05dc40ef3a886a" 
containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" exitCode=2 Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276765 4681 generic.go:334] "Generic (PLEG): container finished" podID="235e9295064844132a05dc40ef3a886a" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" exitCode=0 Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276799 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.276849 4681 scope.go:117] "RemoveContainer" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.280625 4681 generic.go:334] "Generic (PLEG): container finished" podID="5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" containerID="cb46c191eefaa813003dbe700b661affa6091b0f541a43020f65198717cf1f96" exitCode=0 Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.280664 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-12-crc" event={"ID":"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69","Type":"ContainerDied","Data":"cb46c191eefaa813003dbe700b661affa6091b0f541a43020f65198717cf1f96"} Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.281594 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="235e9295064844132a05dc40ef3a886a" podUID="c32a96981201f35bdc64ba062620676a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.302501 4681 scope.go:117] "RemoveContainer" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.306682 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="235e9295064844132a05dc40ef3a886a" podUID="c32a96981201f35bdc64ba062620676a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.323558 4681 scope.go:117] "RemoveContainer" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.340169 4681 scope.go:117] "RemoveContainer" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.359308 4681 scope.go:117] "RemoveContainer" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.386431 4681 scope.go:117] "RemoveContainer" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: E0404 02:05:55.386890 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": container with ID starting with 2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66 not found: ID does not exist" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.386935 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66"} err="failed to get container status \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": rpc error: code = NotFound desc = could not find container \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": container with ID starting with 2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.386961 4681 
scope.go:117] "RemoveContainer" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: E0404 02:05:55.387349 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": container with ID starting with 868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a not found: ID does not exist" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.387384 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a"} err="failed to get container status \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": rpc error: code = NotFound desc = could not find container \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": container with ID starting with 868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.387408 4681 scope.go:117] "RemoveContainer" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: E0404 02:05:55.387781 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": container with ID starting with f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0 not found: ID does not exist" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.387833 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0"} err="failed to get container status \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": rpc error: code = NotFound desc = could not find container \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": container with ID starting with f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.387869 4681 scope.go:117] "RemoveContainer" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: E0404 02:05:55.388192 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": container with ID starting with 0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2 not found: ID does not exist" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388224 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2"} err="failed to get container status \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": rpc error: code = NotFound desc = could not find container \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": container with ID starting with 0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388242 4681 scope.go:117] "RemoveContainer" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: E0404 02:05:55.388566 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": container with ID starting with e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58 not found: ID does not exist" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388596 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} err="failed to get container status \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": rpc error: code = NotFound desc = could not find container \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": container with ID starting with e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388612 4681 scope.go:117] "RemoveContainer" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388914 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66"} err="failed to get container status \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": rpc error: code = NotFound desc = could not find container \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": container with ID starting with 2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.388938 4681 scope.go:117] "RemoveContainer" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389140 4681 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a"} err="failed to get container status \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": rpc error: code = NotFound desc = could not find container \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": container with ID starting with 868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389168 4681 scope.go:117] "RemoveContainer" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389434 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0"} err="failed to get container status \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": rpc error: code = NotFound desc = could not find container \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": container with ID starting with f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389455 4681 scope.go:117] "RemoveContainer" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389672 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2"} err="failed to get container status \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": rpc error: code = NotFound desc = could not find container \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": container with ID starting with 
0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.389725 4681 scope.go:117] "RemoveContainer" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390123 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} err="failed to get container status \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": rpc error: code = NotFound desc = could not find container \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": container with ID starting with e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390149 4681 scope.go:117] "RemoveContainer" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390344 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66"} err="failed to get container status \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": rpc error: code = NotFound desc = could not find container \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": container with ID starting with 2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390361 4681 scope.go:117] "RemoveContainer" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390647 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a"} err="failed to get container status \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": rpc error: code = NotFound desc = could not find container \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": container with ID starting with 868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390667 4681 scope.go:117] "RemoveContainer" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390812 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0"} err="failed to get container status \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": rpc error: code = NotFound desc = could not find container \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": container with ID starting with f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.390832 4681 scope.go:117] "RemoveContainer" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391075 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2"} err="failed to get container status \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": rpc error: code = NotFound desc = could not find container \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": container with ID starting with 0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2 not found: ID does not 
exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391098 4681 scope.go:117] "RemoveContainer" containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391340 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} err="failed to get container status \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": rpc error: code = NotFound desc = could not find container \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": container with ID starting with e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391361 4681 scope.go:117] "RemoveContainer" containerID="2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391532 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66"} err="failed to get container status \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": rpc error: code = NotFound desc = could not find container \"2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66\": container with ID starting with 2791fdb1449327901a9fbc2879b866d0662be7e5c045548be28f987c55b31f66 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391554 4681 scope.go:117] "RemoveContainer" containerID="868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391716 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a"} err="failed to get container status 
\"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": rpc error: code = NotFound desc = could not find container \"868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a\": container with ID starting with 868f34c3df6ae6d9391df0ea11fce0cef7174893730a39b413545dc27f95de5a not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.391738 4681 scope.go:117] "RemoveContainer" containerID="f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.392057 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0"} err="failed to get container status \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": rpc error: code = NotFound desc = could not find container \"f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0\": container with ID starting with f7d46b3977fe622d3792a4163862ec755d995bc680053ceaf1044528676f2ae0 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.392106 4681 scope.go:117] "RemoveContainer" containerID="0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.392419 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2"} err="failed to get container status \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": rpc error: code = NotFound desc = could not find container \"0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2\": container with ID starting with 0934466cb993c126f425e71e93e3598d91ed1cb12056623df2eb3d173564bef2 not found: ID does not exist" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.392436 4681 scope.go:117] "RemoveContainer" 
containerID="e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58" Apr 04 02:05:55 crc kubenswrapper[4681]: I0404 02:05:55.392770 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58"} err="failed to get container status \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": rpc error: code = NotFound desc = could not find container \"e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58\": container with ID starting with e98d2361c12342c873e47f714e2b40aea0c84ab52b71bd8be3e23ff62fd0cd58 not found: ID does not exist" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.552410 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.649863 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access\") pod \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.649928 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock\") pod \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.649963 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir\") pod \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\" (UID: \"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69\") " Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.650066 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock" (OuterVolumeSpecName: "var-lock") pod "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" (UID: "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.650136 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" (UID: "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.650487 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.650516 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.655526 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" (UID: "5c4f9f6f-bb0f-409a-af4c-b023eca2ab69"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:05:56 crc kubenswrapper[4681]: I0404 02:05:56.751608 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c4f9f6f-bb0f-409a-af4c-b023eca2ab69-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:05:57 crc kubenswrapper[4681]: I0404 02:05:57.298153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-12-crc" event={"ID":"5c4f9f6f-bb0f-409a-af4c-b023eca2ab69","Type":"ContainerDied","Data":"8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc"} Apr 04 02:05:57 crc kubenswrapper[4681]: I0404 02:05:57.298208 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8365afb5f514ea9c70f454123c1cc27d54a313597618daccf70a281825d1f0cc" Apr 04 02:05:57 crc kubenswrapper[4681]: I0404 02:05:57.298257 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-12-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.025384 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-retry-1-crc"] Apr 04 02:05:59 crc kubenswrapper[4681]: E0404 02:05:59.025989 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" containerName="installer" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.026009 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" containerName="installer" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.026180 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4f9f6f-bb0f-409a-af4c-b023eca2ab69" containerName="installer" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.026793 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.028118 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-retry-1-crc"] Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.029131 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5vhrm" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.029191 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.186992 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.187072 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.187293 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.288966 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.289054 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.289096 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.289255 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.289445 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.310101 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access\") pod \"installer-8-retry-1-crc\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.353945 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:05:59 crc kubenswrapper[4681]: I0404 02:05:59.605480 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-retry-1-crc"] Apr 04 02:06:00 crc kubenswrapper[4681]: I0404 02:06:00.320796 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-1-crc" event={"ID":"ab7e3566-522f-4eed-add8-d690d057fb83","Type":"ContainerStarted","Data":"73aab193c44f4693a19ba5319ee6aee805b7967b3684abfbd6d71a5b2079f449"} Apr 04 02:06:01 crc kubenswrapper[4681]: I0404 02:06:01.328059 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-1-crc" event={"ID":"ab7e3566-522f-4eed-add8-d690d057fb83","Type":"ContainerStarted","Data":"805a5fd4191094db61221f54f874297ffd936efcfb8a6cad8aa5b99a76391176"} Apr 04 02:06:01 crc kubenswrapper[4681]: I0404 02:06:01.350988 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-retry-1-crc" podStartSLOduration=2.350966966 podStartE2EDuration="2.350966966s" podCreationTimestamp="2026-04-04 02:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:06:01.345853608 +0000 UTC m=+641.011628758" watchObservedRunningTime="2026-04-04 02:06:01.350966966 +0000 UTC m=+641.016742096" Apr 04 02:06:02 crc kubenswrapper[4681]: I0404 02:06:02.526450 4681 scope.go:117] "RemoveContainer" containerID="77fc41ed9dc3215e18eabf5836b3c72a6b725147a83366fb1517e2c064752993" Apr 
04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.026667 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.028312 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.079795 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.093670 4681 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094126 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094646 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver" containerID="cri-o://27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c" gracePeriod=15 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094763 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-check-endpoints" containerID="cri-o://bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332" gracePeriod=15 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094830 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-syncer" containerID="cri-o://30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24" gracePeriod=15 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094834 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373" gracePeriod=15 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.094803 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450" gracePeriod=15 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095575 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.095807 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="setup" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095819 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="setup" Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.095833 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-insecure-readyz" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095841 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-insecure-readyz" Apr 04 02:06:03 crc 
kubenswrapper[4681]: E0404 02:06:03.095857 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-syncer" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095865 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-syncer" Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.095874 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095882 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver" Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.095899 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095907 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.095922 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-check-endpoints" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.095931 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-check-endpoints" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.096043 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-regeneration-controller" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.096058 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-check-endpoints" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.096068 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-cert-syncer" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.096079 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.096088 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerName="kube-apiserver-insecure-readyz" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.170409 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.170470 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.170528 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc 
kubenswrapper[4681]: I0404 02:06:03.170676 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.170737 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271730 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271792 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271828 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271850 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271878 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271893 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271950 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271999 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.271975 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.272008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.272105 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.346668 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_4e6039c7a12c5a0c0ef5917dc7ee5582/kube-apiserver-cert-syncer/0.log" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.347946 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332" exitCode=0 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.347993 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450" exitCode=0 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.348011 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373" exitCode=0 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.348025 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24" exitCode=2 Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.364527 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374233 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374253 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374361 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: I0404 02:06:03.374382 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3f04c31653fd2d52d145a959c922a0d3-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3f04c31653fd2d52d145a959c922a0d3\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:03 crc kubenswrapper[4681]: E0404 02:06:03.403511 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.71:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18a3052bb59d2ac4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:aaec8d0ffd277c0e93001246672220ba,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 02:06:03.402816196 +0000 UTC m=+643.068591336,LastTimestamp:2026-04-04 02:06:03.402816196 +0000 UTC m=+643.068591336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.370346 4681 generic.go:334] "Generic (PLEG): container finished" podID="27afc22e-f982-4252-a9d3-a07bdd837b1c" containerID="c62be49f931c85d8079c2af819cf0ba23ea6e77d258f8e6c59e57db695ec7a18" exitCode=0 Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.370407 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" 
event={"ID":"27afc22e-f982-4252-a9d3-a07bdd837b1c","Type":"ContainerDied","Data":"c62be49f931c85d8079c2af819cf0ba23ea6e77d258f8e6c59e57db695ec7a18"} Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.372147 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.372786 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.374320 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"aaec8d0ffd277c0e93001246672220ba","Type":"ContainerStarted","Data":"236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e"} Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.374369 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"aaec8d0ffd277c0e93001246672220ba","Type":"ContainerStarted","Data":"77a59ec202d106563e36b9efe3986f1df03d16ff9ae69d261e3f7171dc8ac36b"} Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.375883 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 
38.129.56.71:6443: connect: connection refused" Apr 04 02:06:04 crc kubenswrapper[4681]: I0404 02:06:04.376441 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.619853 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.621176 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.621718 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643401 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access\") pod \"27afc22e-f982-4252-a9d3-a07bdd837b1c\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643519 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir\") pod \"27afc22e-f982-4252-a9d3-a07bdd837b1c\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643559 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock\") pod \"27afc22e-f982-4252-a9d3-a07bdd837b1c\" (UID: \"27afc22e-f982-4252-a9d3-a07bdd837b1c\") " Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643585 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "27afc22e-f982-4252-a9d3-a07bdd837b1c" (UID: "27afc22e-f982-4252-a9d3-a07bdd837b1c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643668 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock" (OuterVolumeSpecName: "var-lock") pod "27afc22e-f982-4252-a9d3-a07bdd837b1c" (UID: "27afc22e-f982-4252-a9d3-a07bdd837b1c"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643884 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.643902 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/27afc22e-f982-4252-a9d3-a07bdd837b1c-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.650591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "27afc22e-f982-4252-a9d3-a07bdd837b1c" (UID: "27afc22e-f982-4252-a9d3-a07bdd837b1c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.745874 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/27afc22e-f982-4252-a9d3-a07bdd837b1c-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.995098 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_4e6039c7a12c5a0c0ef5917dc7ee5582/kube-apiserver-cert-syncer/0.log" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.996169 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.996966 4681 status_manager.go:851] "Failed to get status for pod" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.997221 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:05 crc kubenswrapper[4681]: I0404 02:06:05.997510 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050364 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir\") pod \"4e6039c7a12c5a0c0ef5917dc7ee5582\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050422 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir\") pod \"4e6039c7a12c5a0c0ef5917dc7ee5582\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050455 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir\") pod \"4e6039c7a12c5a0c0ef5917dc7ee5582\" (UID: \"4e6039c7a12c5a0c0ef5917dc7ee5582\") " Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4e6039c7a12c5a0c0ef5917dc7ee5582" (UID: "4e6039c7a12c5a0c0ef5917dc7ee5582"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4e6039c7a12c5a0c0ef5917dc7ee5582" (UID: "4e6039c7a12c5a0c0ef5917dc7ee5582"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050590 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4e6039c7a12c5a0c0ef5917dc7ee5582" (UID: "4e6039c7a12c5a0c0ef5917dc7ee5582"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050683 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-audit-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050697 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.050710 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4e6039c7a12c5a0c0ef5917dc7ee5582-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.391312 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_4e6039c7a12c5a0c0ef5917dc7ee5582/kube-apiserver-cert-syncer/0.log" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.392180 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e6039c7a12c5a0c0ef5917dc7ee5582" containerID="27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c" exitCode=0 Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.392316 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.392570 4681 scope.go:117] "RemoveContainer" containerID="bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.395179 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-11-crc" event={"ID":"27afc22e-f982-4252-a9d3-a07bdd837b1c","Type":"ContainerDied","Data":"1765f988587e872351addbe5baf66afe930998e79150e62f276cfa16e546fbb1"} Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.395211 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1765f988587e872351addbe5baf66afe930998e79150e62f276cfa16e546fbb1" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.395298 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-11-crc" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.425395 4681 status_manager.go:851] "Failed to get status for pod" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.426083 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.426583 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.426994 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.427324 4681 status_manager.go:851] "Failed to get status for pod" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.427717 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.437204 4681 scope.go:117] "RemoveContainer" containerID="813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.468538 4681 scope.go:117] "RemoveContainer" containerID="0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.489828 4681 scope.go:117] "RemoveContainer" containerID="30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.508431 4681 scope.go:117] "RemoveContainer" 
containerID="27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.522277 4681 scope.go:117] "RemoveContainer" containerID="93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.541023 4681 scope.go:117] "RemoveContainer" containerID="bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.541507 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332\": container with ID starting with bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332 not found: ID does not exist" containerID="bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.541565 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332"} err="failed to get container status \"bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332\": rpc error: code = NotFound desc = could not find container \"bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332\": container with ID starting with bbced439ceb23b64ba0d0fbd4ab1d4cd5e1143e8af342cbb67b657f7734ed332 not found: ID does not exist" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.541607 4681 scope.go:117] "RemoveContainer" containerID="813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.541914 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450\": container with ID starting with 
813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450 not found: ID does not exist" containerID="813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.541946 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450"} err="failed to get container status \"813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450\": rpc error: code = NotFound desc = could not find container \"813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450\": container with ID starting with 813c127b31bfed1f8e3fff015adfbc3b97c796f4917ef9f10c557180b5d9c450 not found: ID does not exist" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.541968 4681 scope.go:117] "RemoveContainer" containerID="0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.542377 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373\": container with ID starting with 0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373 not found: ID does not exist" containerID="0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.542447 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373"} err="failed to get container status \"0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373\": rpc error: code = NotFound desc = could not find container \"0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373\": container with ID starting with 0a5efcb827410a45f26672998f33b39d45a6d1311b97fd36427f3a711a4e0373 not found: ID does not 
exist" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.542497 4681 scope.go:117] "RemoveContainer" containerID="30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.542823 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24\": container with ID starting with 30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24 not found: ID does not exist" containerID="30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.542848 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24"} err="failed to get container status \"30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24\": rpc error: code = NotFound desc = could not find container \"30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24\": container with ID starting with 30a82adf2abe4345dcd0f37dd190e89b9bfe5368a21abe58dcbaf26128bafc24 not found: ID does not exist" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.542861 4681 scope.go:117] "RemoveContainer" containerID="27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.543455 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c\": container with ID starting with 27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c not found: ID does not exist" containerID="27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.543479 4681 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c"} err="failed to get container status \"27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c\": rpc error: code = NotFound desc = could not find container \"27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c\": container with ID starting with 27fea8c33f2c10ead1dbc5983e8b3bb6383a995d4663cc52e0867033c4cbec0c not found: ID does not exist" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.543493 4681 scope.go:117] "RemoveContainer" containerID="93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f" Apr 04 02:06:06 crc kubenswrapper[4681]: E0404 02:06:06.543786 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f\": container with ID starting with 93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f not found: ID does not exist" containerID="93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f" Apr 04 02:06:06 crc kubenswrapper[4681]: I0404 02:06:06.543827 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f"} err="failed to get container status \"93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f\": rpc error: code = NotFound desc = could not find container \"93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f\": container with ID starting with 93e3ff5184c33538330842141451ce2a0284ed131e855521a2d07892b8f0737f not found: ID does not exist" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.112426 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: 
connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.113659 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.114226 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.114771 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.115227 4681 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.115316 4681 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.115739 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="200ms" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.201023 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.202327 4681 status_manager.go:851] "Failed to get status for pod" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.202863 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.203437 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.211593 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e6039c7a12c5a0c0ef5917dc7ee5582" path="/var/lib/kubelet/pods/4e6039c7a12c5a0c0ef5917dc7ee5582/volumes" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.222711 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.222916 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:07 crc 
kubenswrapper[4681]: E0404 02:06:07.223614 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.224148 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:07 crc kubenswrapper[4681]: W0404 02:06:07.255806 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc32a96981201f35bdc64ba062620676a.slice/crio-655b08c952dd46fac2b8a65cbcf3652d57cd73a5dfa0ccc11440b3ad9c8f8521 WatchSource:0}: Error finding container 655b08c952dd46fac2b8a65cbcf3652d57cd73a5dfa0ccc11440b3ad9c8f8521: Status 404 returned error can't find the container with id 655b08c952dd46fac2b8a65cbcf3652d57cd73a5dfa0ccc11440b3ad9c8f8521 Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.316549 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="400ms" Apr 04 02:06:07 crc kubenswrapper[4681]: I0404 02:06:07.413021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"c32a96981201f35bdc64ba062620676a","Type":"ContainerStarted","Data":"655b08c952dd46fac2b8a65cbcf3652d57cd73a5dfa0ccc11440b3ad9c8f8521"} Apr 04 02:06:07 crc kubenswrapper[4681]: E0404 02:06:07.718303 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="800ms" Apr 04 02:06:08 crc kubenswrapper[4681]: I0404 02:06:08.420207 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"c32a96981201f35bdc64ba062620676a","Type":"ContainerStarted","Data":"56c823dc089257e2ab7c3b75d18462445cf1e7bfab2bea0436e27ad4c10f7c5a"} Apr 04 02:06:08 crc kubenswrapper[4681]: E0404 02:06:08.519761 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="1.6s" Apr 04 02:06:09 crc kubenswrapper[4681]: I0404 02:06:09.427214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"c32a96981201f35bdc64ba062620676a","Type":"ContainerStarted","Data":"f96538d250f4df88b34f5e778d2904dac602c19fee42c7bdfc8086640ba11984"} Apr 04 02:06:09 crc kubenswrapper[4681]: I0404 02:06:09.427530 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"c32a96981201f35bdc64ba062620676a","Type":"ContainerStarted","Data":"7f69d800b9977cf49664e51ef49b18eb8c9a3505a045045965466303c3b2d0df"} Apr 04 02:06:10 crc kubenswrapper[4681]: E0404 02:06:10.121043 4681 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="3.2s" Apr 04 02:06:10 crc kubenswrapper[4681]: I0404 02:06:10.438503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"c32a96981201f35bdc64ba062620676a","Type":"ContainerStarted","Data":"91d4de8b48ecf15154a1c2e49c305494f5224ad9f0cc06a503592cca72c3bf1c"} Apr 04 02:06:10 crc kubenswrapper[4681]: I0404 02:06:10.438914 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:10 crc kubenswrapper[4681]: I0404 02:06:10.438939 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:10 crc kubenswrapper[4681]: E0404 02:06:10.439563 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:10 crc kubenswrapper[4681]: I0404 02:06:10.439729 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:10 crc kubenswrapper[4681]: I0404 02:06:10.439918 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.079459 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" containerName="registry" containerID="cri-o://865eaed61abfd23ed92dd38fcab82fb83585db37b279678b0dbea7a0e9b9fc65" gracePeriod=30 Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.205374 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.206132 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.206588 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.446377 4681 generic.go:334] "Generic (PLEG): container finished" podID="434092c3-92f3-4a1f-833a-872828fdd96e" containerID="865eaed61abfd23ed92dd38fcab82fb83585db37b279678b0dbea7a0e9b9fc65" exitCode=0 Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.446485 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" 
event={"ID":"434092c3-92f3-4a1f-833a-872828fdd96e","Type":"ContainerDied","Data":"865eaed61abfd23ed92dd38fcab82fb83585db37b279678b0dbea7a0e9b9fc65"} Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.446659 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:11 crc kubenswrapper[4681]: I0404 02:06:11.446672 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:11 crc kubenswrapper[4681]: E0404 02:06:11.447089 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.077357 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.078223 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.078821 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.079684 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.080117 4681 status_manager.go:851] "Failed to get status for pod" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7bdb65549f-f4hr5\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: E0404 02:06:12.086792 4681 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.71:6443: connect: 
connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18a3052bb59d2ac4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:aaec8d0ffd277c0e93001246672220ba,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-04-04 02:06:03.402816196 +0000 UTC m=+643.068591336,LastTimestamp:2026-04-04 02:06:03.402816196 +0000 UTC m=+643.068591336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.172309 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.172372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.172403 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rrws\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: 
\"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.172435 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.173473 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.177896 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws" (OuterVolumeSpecName: "kube-api-access-5rrws") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "kube-api-access-5rrws". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.177896 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.191974 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.273548 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.273625 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.273655 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.273969 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"434092c3-92f3-4a1f-833a-872828fdd96e\" (UID: \"434092c3-92f3-4a1f-833a-872828fdd96e\") " Apr 04 02:06:12 crc kubenswrapper[4681]: 
I0404 02:06:12.274208 4681 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-registry-tls\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.274228 4681 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-registry-certificates\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.274239 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rrws\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-kube-api-access-5rrws\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.274251 4681 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/434092c3-92f3-4a1f-833a-872828fdd96e-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.274815 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.276858 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.277987 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.282052 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "434092c3-92f3-4a1f-833a-872828fdd96e" (UID: "434092c3-92f3-4a1f-833a-872828fdd96e"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.375656 4681 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/434092c3-92f3-4a1f-833a-872828fdd96e-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.375708 4681 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/434092c3-92f3-4a1f-833a-872828fdd96e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.375729 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/434092c3-92f3-4a1f-833a-872828fdd96e-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.454650 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" event={"ID":"434092c3-92f3-4a1f-833a-872828fdd96e","Type":"ContainerDied","Data":"d2058dbdc852ebbe96ac3955dae456a3da364aa1e4e8253617a4738f39a710d3"} Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.454739 4681 scope.go:117] "RemoveContainer" containerID="865eaed61abfd23ed92dd38fcab82fb83585db37b279678b0dbea7a0e9b9fc65" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.454747 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.455816 4681 status_manager.go:851] "Failed to get status for pod" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7bdb65549f-f4hr5\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.456305 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.456765 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.458334 4681 status_manager.go:851] "Failed to get status for pod" 
podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.474528 4681 status_manager.go:851] "Failed to get status for pod" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7bdb65549f-f4hr5\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.474896 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.475328 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:12 crc kubenswrapper[4681]: I0404 02:06:12.475716 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:13 crc kubenswrapper[4681]: E0404 02:06:13.321696 4681 controller.go:145] "Failed to 
ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.71:6443: connect: connection refused" interval="6.4s" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.205564 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.206356 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.206557 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.206755 4681 status_manager.go:851] "Failed to get status for pod" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7bdb65549f-f4hr5\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.207000 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.218648 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.218685 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:15 crc kubenswrapper[4681]: E0404 02:06:15.219159 4681 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.219820 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:15 crc kubenswrapper[4681]: W0404 02:06:15.253453 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f04c31653fd2d52d145a959c922a0d3.slice/crio-f8926d0fc7fb672bf60e67f27aaa5d320b9b6a90a95089b387b8c3d1d3f43466 WatchSource:0}: Error finding container f8926d0fc7fb672bf60e67f27aaa5d320b9b6a90a95089b387b8c3d1d3f43466: Status 404 returned error can't find the container with id f8926d0fc7fb672bf60e67f27aaa5d320b9b6a90a95089b387b8c3d1d3f43466 Apr 04 02:06:15 crc kubenswrapper[4681]: I0404 02:06:15.482068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"f8926d0fc7fb672bf60e67f27aaa5d320b9b6a90a95089b387b8c3d1d3f43466"} Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.489987 4681 generic.go:334] "Generic (PLEG): container finished" podID="3f04c31653fd2d52d145a959c922a0d3" containerID="ee939fbf4952b365528a4b4edc5de473a2efaaaebd2759aaeb0ab9f24b5be117" exitCode=0 Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.490047 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerDied","Data":"ee939fbf4952b365528a4b4edc5de473a2efaaaebd2759aaeb0ab9f24b5be117"} Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.490361 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.490395 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:16 crc kubenswrapper[4681]: E0404 02:06:16.490852 4681 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.491025 4681 status_manager.go:851] "Failed to get status for pod" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" pod="openshift-image-registry/image-registry-7bdb65549f-f4hr5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-7bdb65549f-f4hr5\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.491826 4681 status_manager.go:851] "Failed to get status for pod" podUID="c32a96981201f35bdc64ba062620676a" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.492115 4681 status_manager.go:851] "Failed to get status for pod" podUID="aaec8d0ffd277c0e93001246672220ba" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:16 crc kubenswrapper[4681]: I0404 02:06:16.492567 4681 status_manager.go:851] "Failed to get status for pod" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" pod="openshift-kube-apiserver/installer-11-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-11-crc\": dial tcp 38.129.56.71:6443: connect: connection refused" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.225208 4681 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.225543 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.225628 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.225670 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="c32a96981201f35bdc64ba062620676a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.226009 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.226039 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.226216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.226243 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.500331 
4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"c2ec5ed3770b3baa2015bc36eccaba2300ac84c4ea2550ed233af7e5a5d56653"} Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.500372 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"e3c703c111fed07befd2021173e336c3a1830d81e7dbdb84eabc16adbfa18036"} Apr 04 02:06:17 crc kubenswrapper[4681]: I0404 02:06:17.500381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"0327d0b443b40fe6add2106562a92ae7500f6b4fdfe828daa024e25a14f05e1a"} Apr 04 02:06:18 crc kubenswrapper[4681]: I0404 02:06:18.509281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"89d7b642e70f5c5c5d387a3caa8d75523bbd34986dccbcb1823f23c71902f1e6"} Apr 04 02:06:18 crc kubenswrapper[4681]: I0404 02:06:18.509574 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:18 crc kubenswrapper[4681]: I0404 02:06:18.509585 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3f04c31653fd2d52d145a959c922a0d3","Type":"ContainerStarted","Data":"fe2ab3a0dececa1ea3bbabff1019c2e6b2bf22b8bca093c7972016c5511d79bc"} Apr 04 02:06:18 crc kubenswrapper[4681]: I0404 02:06:18.509587 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:18 crc kubenswrapper[4681]: I0404 02:06:18.509612 4681 mirror_client.go:130] 
"Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:20 crc kubenswrapper[4681]: I0404 02:06:20.219958 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:20 crc kubenswrapper[4681]: I0404 02:06:20.220014 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:20 crc kubenswrapper[4681]: I0404 02:06:20.226416 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:06:20 crc kubenswrapper[4681]: I0404 02:06:20.226576 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="c32a96981201f35bdc64ba062620676a" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:06:20 crc kubenswrapper[4681]: I0404 02:06:20.227788 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.234965 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.519916 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.533438 4681 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3f04c31653fd2d52d145a959c922a0d3" podUID="29240208-1031-4f43-8c6b-5b3bb80acdf5" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.550191 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.550234 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cb6dbd06-3a60-48ca-8124-88d665b42c16" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.550546 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.550575 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="4736c33c-1698-4dd0-97e3-c8b6f881661c" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.607751 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-crc" oldPodUID="c32a96981201f35bdc64ba062620676a" podUID="bd38d99c-33b5-4195-b545-a9748adf7d0d" Apr 04 02:06:23 crc kubenswrapper[4681]: I0404 02:06:23.619044 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3f04c31653fd2d52d145a959c922a0d3" podUID="29240208-1031-4f43-8c6b-5b3bb80acdf5" Apr 04 02:06:31 crc kubenswrapper[4681]: I0404 02:06:31.611546 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-retry-1-crc_ab7e3566-522f-4eed-add8-d690d057fb83/installer/0.log" Apr 04 02:06:31 crc kubenswrapper[4681]: I0404 02:06:31.612075 4681 
generic.go:334] "Generic (PLEG): container finished" podID="ab7e3566-522f-4eed-add8-d690d057fb83" containerID="805a5fd4191094db61221f54f874297ffd936efcfb8a6cad8aa5b99a76391176" exitCode=1 Apr 04 02:06:31 crc kubenswrapper[4681]: I0404 02:06:31.612111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-1-crc" event={"ID":"ab7e3566-522f-4eed-add8-d690d057fb83","Type":"ContainerDied","Data":"805a5fd4191094db61221f54f874297ffd936efcfb8a6cad8aa5b99a76391176"} Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.848981 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-retry-1-crc_ab7e3566-522f-4eed-add8-d690d057fb83/installer/0.log" Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.849360 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.949484 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access\") pod \"ab7e3566-522f-4eed-add8-d690d057fb83\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.949528 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock\") pod \"ab7e3566-522f-4eed-add8-d690d057fb83\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.949597 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir\") pod \"ab7e3566-522f-4eed-add8-d690d057fb83\" (UID: \"ab7e3566-522f-4eed-add8-d690d057fb83\") " Apr 04 
02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.949684 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock" (OuterVolumeSpecName: "var-lock") pod "ab7e3566-522f-4eed-add8-d690d057fb83" (UID: "ab7e3566-522f-4eed-add8-d690d057fb83"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.949757 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ab7e3566-522f-4eed-add8-d690d057fb83" (UID: "ab7e3566-522f-4eed-add8-d690d057fb83"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:32 crc kubenswrapper[4681]: I0404 02:06:32.954536 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ab7e3566-522f-4eed-add8-d690d057fb83" (UID: "ab7e3566-522f-4eed-add8-d690d057fb83"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.050232 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ab7e3566-522f-4eed-add8-d690d057fb83-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.050327 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.050342 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ab7e3566-522f-4eed-add8-d690d057fb83-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.627770 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-8-retry-1-crc_ab7e3566-522f-4eed-add8-d690d057fb83/installer/0.log" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.627845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-1-crc" event={"ID":"ab7e3566-522f-4eed-add8-d690d057fb83","Type":"ContainerDied","Data":"73aab193c44f4693a19ba5319ee6aee805b7967b3684abfbd6d71a5b2079f449"} Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.627875 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73aab193c44f4693a19ba5319ee6aee805b7967b3684abfbd6d71a5b2079f449" Apr 04 02:06:33 crc kubenswrapper[4681]: I0404 02:06:33.627952 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-1-crc" Apr 04 02:06:34 crc kubenswrapper[4681]: I0404 02:06:34.784441 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Apr 04 02:06:34 crc kubenswrapper[4681]: I0404 02:06:34.928603 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Apr 04 02:06:35 crc kubenswrapper[4681]: I0404 02:06:35.227688 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Apr 04 02:06:35 crc kubenswrapper[4681]: I0404 02:06:35.616243 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Apr 04 02:06:35 crc kubenswrapper[4681]: I0404 02:06:35.855429 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.047831 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.100901 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.123009 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.438539 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.519342 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Apr 04 02:06:36 crc kubenswrapper[4681]: 
I0404 02:06:36.562107 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.619820 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.634811 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.640087 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.659465 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.872925 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.895942 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Apr 04 02:06:36 crc kubenswrapper[4681]: I0404 02:06:36.953436 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.025987 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.134471 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.230597 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Apr 04 
02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.372366 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.374522 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.438430 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.444735 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.541569 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.753535 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Apr 04 02:06:37 crc kubenswrapper[4681]: I0404 02:06:37.919920 4681 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.002065 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.060199 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.133062 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.181194 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.200799 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.563214 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.635466 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.680358 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.746147 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.774643 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.786476 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.789425 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.848741 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.862576 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Apr 04 02:06:38 crc 
kubenswrapper[4681]: I0404 02:06:38.897058 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Apr 04 02:06:38 crc kubenswrapper[4681]: I0404 02:06:38.979595 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.067845 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.074201 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.192811 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.207178 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.275154 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.301153 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.304843 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.317633 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.323844 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Apr 
04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.585899 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.596171 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.634417 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.640724 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.646660 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.723850 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.732018 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.845653 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.877969 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.903682 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.905908 4681 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Apr 04 02:06:39 crc kubenswrapper[4681]: I0404 02:06:39.938832 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.050891 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.096540 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.142613 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.193999 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.207658 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.215898 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.302938 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.304933 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.377333 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.454568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.457199 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.469728 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.498503 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.551990 4681 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.632650 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.708069 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.759538 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.831752 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.862018 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Apr 04 02:06:40 crc 
kubenswrapper[4681]: I0404 02:06:40.875631 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Apr 04 02:06:40 crc kubenswrapper[4681]: I0404 02:06:40.930954 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.018895 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.084531 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.126933 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.136862 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.155055 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.179013 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.210977 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.224250 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.409709 4681 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.477249 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.482958 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.516295 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.562320 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.593673 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.607558 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.608583 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.634163 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.655442 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.660021 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.712854 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.754025 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.828223 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.832307 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.835348 4681 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.837445 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.837431925 podStartE2EDuration="38.837431925s" podCreationTimestamp="2026-04-04 02:06:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:06:23.530675413 +0000 UTC m=+663.196450573" watchObservedRunningTime="2026-04-04 02:06:41.837431925 +0000 UTC m=+681.503207045" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.840954 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-image-registry/image-registry-7bdb65549f-f4hr5","openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.841021 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 
02:06:41.841049 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq","openshift-marketplace/certified-operators-5qn4m","openshift-marketplace/redhat-marketplace-w2zrg","openshift-marketplace/community-operators-m8stk","openshift-marketplace/redhat-operators-dvhf7"] Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.841312 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dvhf7" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="registry-server" containerID="cri-o://eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7" gracePeriod=30 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.841594 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5qn4m" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="registry-server" containerID="cri-o://e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889" gracePeriod=30 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.841838 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-w2zrg" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="registry-server" containerID="cri-o://8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" gracePeriod=30 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.841937 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" containerID="cri-o://bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a" gracePeriod=30 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.842235 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-m8stk" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="registry-server" containerID="cri-o://40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a" gracePeriod=30 Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.886195 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=18.886169197 podStartE2EDuration="18.886169197s" podCreationTimestamp="2026-04-04 02:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:06:41.87830623 +0000 UTC m=+681.544081370" watchObservedRunningTime="2026-04-04 02:06:41.886169197 +0000 UTC m=+681.551944347" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.900744 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.920778 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.92075652 podStartE2EDuration="18.92075652s" podCreationTimestamp="2026-04-04 02:06:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:06:41.914237 +0000 UTC m=+681.580012130" watchObservedRunningTime="2026-04-04 02:06:41.92075652 +0000 UTC m=+681.586531650" Apr 04 02:06:41 crc kubenswrapper[4681]: I0404 02:06:41.998325 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.021644 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Apr 04 02:06:42 crc kubenswrapper[4681]: 
I0404 02:06:42.021877 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.118615 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.132386 4681 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.133720 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.196444 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.224579 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.270853 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.314076 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.457558 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.605859 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.608699 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Apr 04 
02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.701507 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.740781 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.769166 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.800176 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.802599 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.833496 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.869691 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.878905 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.975889 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.981741 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Apr 04 02:06:42 crc kubenswrapper[4681]: I0404 02:06:42.991867 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.140232 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.184981 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.219972 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" path="/var/lib/kubelet/pods/434092c3-92f3-4a1f-833a-872828fdd96e/volumes" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.230128 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.271660 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.271927 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.275591 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.311249 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.323392 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.325675 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Apr 04 
02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.329922 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.361814 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.393639 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.433006 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.464401 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.475243 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 is running failed: container process not found" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.475988 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 is running failed: container process not found" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.476484 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container 
is not created or running: checking if PID of 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 is running failed: container process not found" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.476563 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-w2zrg" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="registry-server" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.485498 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.492061 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.497494 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.506380 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.518045 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592759 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z87tn\" (UniqueName: \"kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn\") pod \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592813 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities\") pod \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592853 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities\") pod \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592879 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb4kb\" (UniqueName: \"kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb\") pod \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592923 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content\") pod \"1b3e95cc-25d6-4efd-8828-894657c29bcb\" (UID: 
\"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.592982 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities\") pod \"1b3e95cc-25d6-4efd-8828-894657c29bcb\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593002 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content\") pod \"f00114dc-2aae-4d37-8143-71336f144be3\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593020 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpblw\" (UniqueName: \"kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw\") pod \"1b3e95cc-25d6-4efd-8828-894657c29bcb\" (UID: \"1b3e95cc-25d6-4efd-8828-894657c29bcb\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593043 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content\") pod \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\" (UID: \"b0cbd40c-5c8c-451b-af65-fb67ba867ced\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593063 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2bfd\" (UniqueName: \"kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd\") pod \"f00114dc-2aae-4d37-8143-71336f144be3\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593082 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wphgv\" (UniqueName: \"kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv\") pod \"664aa862-1bb6-421a-87b9-992ead56694b\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593102 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content\") pod \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\" (UID: \"da41f745-08e9-4d36-ad1d-3b054a4f0a2f\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593121 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities\") pod \"f00114dc-2aae-4d37-8143-71336f144be3\" (UID: \"f00114dc-2aae-4d37-8143-71336f144be3\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.593988 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities" (OuterVolumeSpecName: "utilities") pod "b0cbd40c-5c8c-451b-af65-fb67ba867ced" (UID: "b0cbd40c-5c8c-451b-af65-fb67ba867ced"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.594442 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities" (OuterVolumeSpecName: "utilities") pod "da41f745-08e9-4d36-ad1d-3b054a4f0a2f" (UID: "da41f745-08e9-4d36-ad1d-3b054a4f0a2f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.594633 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities" (OuterVolumeSpecName: "utilities") pod "f00114dc-2aae-4d37-8143-71336f144be3" (UID: "f00114dc-2aae-4d37-8143-71336f144be3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.595290 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities" (OuterVolumeSpecName: "utilities") pod "1b3e95cc-25d6-4efd-8828-894657c29bcb" (UID: "1b3e95cc-25d6-4efd-8828-894657c29bcb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.599161 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb" (OuterVolumeSpecName: "kube-api-access-kb4kb") pod "da41f745-08e9-4d36-ad1d-3b054a4f0a2f" (UID: "da41f745-08e9-4d36-ad1d-3b054a4f0a2f"). InnerVolumeSpecName "kube-api-access-kb4kb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.599325 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw" (OuterVolumeSpecName: "kube-api-access-qpblw") pod "1b3e95cc-25d6-4efd-8828-894657c29bcb" (UID: "1b3e95cc-25d6-4efd-8828-894657c29bcb"). InnerVolumeSpecName "kube-api-access-qpblw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.600097 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv" (OuterVolumeSpecName: "kube-api-access-wphgv") pod "664aa862-1bb6-421a-87b9-992ead56694b" (UID: "664aa862-1bb6-421a-87b9-992ead56694b"). InnerVolumeSpecName "kube-api-access-wphgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.606139 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn" (OuterVolumeSpecName: "kube-api-access-z87tn") pod "b0cbd40c-5c8c-451b-af65-fb67ba867ced" (UID: "b0cbd40c-5c8c-451b-af65-fb67ba867ced"). InnerVolumeSpecName "kube-api-access-z87tn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.610420 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd" (OuterVolumeSpecName: "kube-api-access-r2bfd") pod "f00114dc-2aae-4d37-8143-71336f144be3" (UID: "f00114dc-2aae-4d37-8143-71336f144be3"). InnerVolumeSpecName "kube-api-access-r2bfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.621972 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1b3e95cc-25d6-4efd-8828-894657c29bcb" (UID: "1b3e95cc-25d6-4efd-8828-894657c29bcb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.656317 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.657094 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f00114dc-2aae-4d37-8143-71336f144be3" (UID: "f00114dc-2aae-4d37-8143-71336f144be3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.659947 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da41f745-08e9-4d36-ad1d-3b054a4f0a2f" (UID: "da41f745-08e9-4d36-ad1d-3b054a4f0a2f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694194 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics\") pod \"664aa862-1bb6-421a-87b9-992ead56694b\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694253 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca\") pod \"664aa862-1bb6-421a-87b9-992ead56694b\" (UID: \"664aa862-1bb6-421a-87b9-992ead56694b\") " Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694492 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z87tn\" (UniqueName: \"kubernetes.io/projected/b0cbd40c-5c8c-451b-af65-fb67ba867ced-kube-api-access-z87tn\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694509 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694523 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694534 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb4kb\" (UniqueName: \"kubernetes.io/projected/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-kube-api-access-kb4kb\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694545 4681 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694554 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1b3e95cc-25d6-4efd-8828-894657c29bcb-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694564 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694575 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpblw\" (UniqueName: \"kubernetes.io/projected/1b3e95cc-25d6-4efd-8828-894657c29bcb-kube-api-access-qpblw\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694587 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2bfd\" (UniqueName: \"kubernetes.io/projected/f00114dc-2aae-4d37-8143-71336f144be3-kube-api-access-r2bfd\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694598 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wphgv\" (UniqueName: \"kubernetes.io/projected/664aa862-1bb6-421a-87b9-992ead56694b-kube-api-access-wphgv\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694609 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41f745-08e9-4d36-ad1d-3b054a4f0a2f-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.694619 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/f00114dc-2aae-4d37-8143-71336f144be3-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.695318 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "664aa862-1bb6-421a-87b9-992ead56694b" (UID: "664aa862-1bb6-421a-87b9-992ead56694b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.696631 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "664aa862-1bb6-421a-87b9-992ead56694b" (UID: "664aa862-1bb6-421a-87b9-992ead56694b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.699402 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.710236 4681 generic.go:334] "Generic (PLEG): container finished" podID="664aa862-1bb6-421a-87b9-992ead56694b" containerID="bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a" exitCode=0 Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.710300 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerDied","Data":"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.710384 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" event={"ID":"664aa862-1bb6-421a-87b9-992ead56694b","Type":"ContainerDied","Data":"2e10c9c28901d26d2cb700af683b207de6979efa1c472f6bfb446404f5b7b855"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.710411 4681 scope.go:117] "RemoveContainer" containerID="bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.710416 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kxzxq" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.719128 4681 generic.go:334] "Generic (PLEG): container finished" podID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerID="40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a" exitCode=0 Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.719192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerDied","Data":"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.719220 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8stk" event={"ID":"da41f745-08e9-4d36-ad1d-3b054a4f0a2f","Type":"ContainerDied","Data":"bf1b2ca664a043738a03784b7f337797e99b9d468a8f615dce9358d5a01f51c8"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.719312 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8stk" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.720095 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.725289 4681 generic.go:334] "Generic (PLEG): container finished" podID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerID="eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7" exitCode=0 Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.725355 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerDied","Data":"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.725384 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dvhf7" event={"ID":"b0cbd40c-5c8c-451b-af65-fb67ba867ced","Type":"ContainerDied","Data":"bdb8cfad974e2015a6e9a04476c19f3d6c245c02e2c0bb0d9e59b18002cf9cd3"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.725470 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dvhf7" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.730305 4681 generic.go:334] "Generic (PLEG): container finished" podID="f00114dc-2aae-4d37-8143-71336f144be3" containerID="e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889" exitCode=0 Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.730381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerDied","Data":"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.730408 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5qn4m" event={"ID":"f00114dc-2aae-4d37-8143-71336f144be3","Type":"ContainerDied","Data":"2fc423564d475057b567302bc17716463da1952c821cc09db70634fe92283c2c"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.730379 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5qn4m" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.733189 4681 generic.go:334] "Generic (PLEG): container finished" podID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" exitCode=0 Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.733225 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerDied","Data":"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.733365 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w2zrg" event={"ID":"1b3e95cc-25d6-4efd-8828-894657c29bcb","Type":"ContainerDied","Data":"f61bf544592be43e66551e2ad2efce665711b9491d6a4baa033d226b9064b82c"} Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.733370 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w2zrg" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.739497 4681 scope.go:117] "RemoveContainer" containerID="2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.740864 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.746112 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kxzxq"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.746682 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0cbd40c-5c8c-451b-af65-fb67ba867ced" (UID: "b0cbd40c-5c8c-451b-af65-fb67ba867ced"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.766239 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8stk"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.770963 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8stk"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.775190 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2zrg"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.778647 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-w2zrg"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.783086 4681 scope.go:117] "RemoveContainer" containerID="bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.783682 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a\": container with ID starting with bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a not found: ID does not exist" containerID="bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.783722 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a"} err="failed to get container status \"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a\": rpc error: code = NotFound desc = could not find container \"bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a\": container with ID starting with bdc413e206247e2853a81056cd41a77940bd28f211f4e44aecce13c5f01b238a not found: ID does not exist" Apr 04 02:06:43 
crc kubenswrapper[4681]: I0404 02:06:43.783750 4681 scope.go:117] "RemoveContainer" containerID="2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.784080 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341\": container with ID starting with 2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341 not found: ID does not exist" containerID="2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.784098 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341"} err="failed to get container status \"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341\": rpc error: code = NotFound desc = could not find container \"2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341\": container with ID starting with 2402eb5715d23e9fbc7ffffefaed58a58dacccd94a26b8309a9895842afa1341 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.784114 4681 scope.go:117] "RemoveContainer" containerID="40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.786989 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5qn4m"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.791482 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5qn4m"] Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.795335 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0cbd40c-5c8c-451b-af65-fb67ba867ced-catalog-content\") on node \"crc\" 
DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.795360 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.795372 4681 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/664aa862-1bb6-421a-87b9-992ead56694b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.803719 4681 scope.go:117] "RemoveContainer" containerID="d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.804709 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.817514 4681 scope.go:117] "RemoveContainer" containerID="c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.830373 4681 scope.go:117] "RemoveContainer" containerID="40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.830757 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a\": container with ID starting with 40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a not found: ID does not exist" containerID="40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.830803 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a"} err="failed to get container status \"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a\": rpc error: code = NotFound desc = could not find container \"40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a\": container with ID starting with 40e51735094189321912b95707e729c950d33622e0f4bcc2d1b0cea4c9b3f59a not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.830835 4681 scope.go:117] "RemoveContainer" containerID="d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.831068 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917\": container with ID starting with d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917 not found: ID does not exist" containerID="d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.831160 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917"} err="failed to get container status \"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917\": rpc error: code = NotFound desc = could not find container \"d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917\": container with ID starting with d1b6fd38b21cfbb62f48f344d9a73c2ab21d3e3a70fa665b9e0515a46846f917 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.831233 4681 scope.go:117] "RemoveContainer" containerID="c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.831665 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d\": container with ID starting with c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d not found: ID does not exist" containerID="c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.831721 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d"} err="failed to get container status \"c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d\": rpc error: code = NotFound desc = could not find container \"c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d\": container with ID starting with c437580a9d444045ca6aaa683532dc88323dfeca43cb76045415025b047ace9d not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.831748 4681 scope.go:117] "RemoveContainer" containerID="eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.844452 4681 scope.go:117] "RemoveContainer" containerID="a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.858622 4681 scope.go:117] "RemoveContainer" containerID="258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.874769 4681 scope.go:117] "RemoveContainer" containerID="eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.875856 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7\": container with ID starting with 
eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7 not found: ID does not exist" containerID="eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.875893 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7"} err="failed to get container status \"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7\": rpc error: code = NotFound desc = could not find container \"eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7\": container with ID starting with eb030fc38f85a67b03f98de155bb65fe11812a38b3a790fe6470f05ed35fa7e7 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.875917 4681 scope.go:117] "RemoveContainer" containerID="a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.876248 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe\": container with ID starting with a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe not found: ID does not exist" containerID="a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.876282 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe"} err="failed to get container status \"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe\": rpc error: code = NotFound desc = could not find container \"a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe\": container with ID starting with a26d30b52bf9560a1dec69af55c9e93136e5d473550f0d67dc658a6b78f7a9fe not found: ID does not 
exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.876295 4681 scope.go:117] "RemoveContainer" containerID="258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.876775 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690\": container with ID starting with 258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690 not found: ID does not exist" containerID="258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.876830 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690"} err="failed to get container status \"258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690\": rpc error: code = NotFound desc = could not find container \"258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690\": container with ID starting with 258fb6d829e1afab622e32830f0c3a1037a67688eae1f24ae2637fcd57409690 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.876870 4681 scope.go:117] "RemoveContainer" containerID="e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.890308 4681 scope.go:117] "RemoveContainer" containerID="e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.905559 4681 scope.go:117] "RemoveContainer" containerID="043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.919030 4681 scope.go:117] "RemoveContainer" containerID="e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889" Apr 04 02:06:43 crc 
kubenswrapper[4681]: E0404 02:06:43.919451 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889\": container with ID starting with e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889 not found: ID does not exist" containerID="e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.919487 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889"} err="failed to get container status \"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889\": rpc error: code = NotFound desc = could not find container \"e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889\": container with ID starting with e0de81439966fae3310cd128018f42d8ad943049b2e0bf38e79c551d10557889 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.919512 4681 scope.go:117] "RemoveContainer" containerID="e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.919809 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b\": container with ID starting with e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b not found: ID does not exist" containerID="e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.919864 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b"} err="failed to get container status 
\"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b\": rpc error: code = NotFound desc = could not find container \"e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b\": container with ID starting with e2c741193f01dc17ca65beff11299aecab86425978ca4f622052f6a5386adb4b not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.919882 4681 scope.go:117] "RemoveContainer" containerID="043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.920181 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a\": container with ID starting with 043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a not found: ID does not exist" containerID="043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.920200 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a"} err="failed to get container status \"043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a\": rpc error: code = NotFound desc = could not find container \"043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a\": container with ID starting with 043873db76115a77c9ee982f597acffbf8aa1a2683288579c41319e06cd8b16a not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.920211 4681 scope.go:117] "RemoveContainer" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.935423 4681 scope.go:117] "RemoveContainer" containerID="56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.947372 4681 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.952990 4681 scope.go:117] "RemoveContainer" containerID="ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.965215 4681 scope.go:117] "RemoveContainer" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.965563 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5\": container with ID starting with 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 not found: ID does not exist" containerID="8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.965597 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5"} err="failed to get container status \"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5\": rpc error: code = NotFound desc = could not find container \"8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5\": container with ID starting with 8a01211c8e821b88a6220ad8ba9c9eeb340e74df254c8585fc16ab4b0d3b01e5 not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.965620 4681 scope.go:117] "RemoveContainer" containerID="56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.965923 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a\": container with ID starting with 
56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a not found: ID does not exist" containerID="56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.965974 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a"} err="failed to get container status \"56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a\": rpc error: code = NotFound desc = could not find container \"56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a\": container with ID starting with 56c885ee7f48cda2dbcef6372e3d6eae4c77ec8ef4564570159f151d31dffe0a not found: ID does not exist" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.966011 4681 scope.go:117] "RemoveContainer" containerID="ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4" Apr 04 02:06:43 crc kubenswrapper[4681]: E0404 02:06:43.966406 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4\": container with ID starting with ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4 not found: ID does not exist" containerID="ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4" Apr 04 02:06:43 crc kubenswrapper[4681]: I0404 02:06:43.966437 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4"} err="failed to get container status \"ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4\": rpc error: code = NotFound desc = could not find container \"ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4\": container with ID starting with ea39a7e416775c2c2210ed4efb462cad9511190e04a2f1b139bd1e75a8dc18b4 not found: ID does not 
exist" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.059322 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dvhf7"] Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.066524 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dvhf7"] Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.246024 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.345931 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.387724 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.411366 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.432367 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.435645 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.446755 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.552786 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.730211 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Apr 
04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.792059 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.877012 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Apr 04 02:06:44 crc kubenswrapper[4681]: I0404 02:06:44.885532 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.037370 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.124495 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.207108 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" path="/var/lib/kubelet/pods/1b3e95cc-25d6-4efd-8828-894657c29bcb/volumes" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.207899 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664aa862-1bb6-421a-87b9-992ead56694b" path="/var/lib/kubelet/pods/664aa862-1bb6-421a-87b9-992ead56694b/volumes" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.208470 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" path="/var/lib/kubelet/pods/b0cbd40c-5c8c-451b-af65-fb67ba867ced/volumes" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.209846 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" path="/var/lib/kubelet/pods/da41f745-08e9-4d36-ad1d-3b054a4f0a2f/volumes" 
Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.210591 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f00114dc-2aae-4d37-8143-71336f144be3" path="/var/lib/kubelet/pods/f00114dc-2aae-4d37-8143-71336f144be3/volumes" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.224594 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.230109 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.281364 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.365745 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.476312 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.486004 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.522070 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.574735 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.674840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.710775 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.754196 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.766721 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.783358 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.804923 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.808622 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.835761 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.865508 4681 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.989501 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:06:45 crc kubenswrapper[4681]: I0404 02:06:45.989746 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="aaec8d0ffd277c0e93001246672220ba" containerName="startup-monitor" containerID="cri-o://236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e" gracePeriod=5 Apr 04 02:06:46 crc kubenswrapper[4681]: 
I0404 02:06:46.009608 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.095831 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.098785 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.128423 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.165857 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.176937 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.311395 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.406230 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.415808 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.455439 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.498612 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 
02:06:46.503580 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.603257 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.606913 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.612171 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.650558 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.679350 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.694790 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.782828 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.832004 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.864061 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.957931 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Apr 04 02:06:46 crc kubenswrapper[4681]: I0404 02:06:46.987135 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.017384 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.107089 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.149046 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.157881 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.163509 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.225294 4681 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.225340 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="c32a96981201f35bdc64ba062620676a" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Apr 04 02:06:47 crc kubenswrapper[4681]: 
I0404 02:06:47.229236 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.236331 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.287945 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.318077 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.350537 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.376445 4681 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.558307 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.612042 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.641476 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.647252 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.688684 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.712256 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.801020 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Apr 04 02:06:47 crc kubenswrapper[4681]: I0404 02:06:47.890720 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.025681 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.075156 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.105896 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.108557 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.485342 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.534714 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.583077 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.644673 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.742944 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.785467 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Apr 04 02:06:48 crc kubenswrapper[4681]: I0404 02:06:48.895254 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.058703 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.172203 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.216728 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.262823 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.305823 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.517864 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.583109 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.724843 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Apr 04 02:06:49 crc kubenswrapper[4681]: I0404 02:06:49.852695 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Apr 04 02:06:50 crc kubenswrapper[4681]: I0404 02:06:50.525233 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Apr 04 02:06:50 crc kubenswrapper[4681]: I0404 02:06:50.629245 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Apr 04 02:06:50 crc kubenswrapper[4681]: I0404 02:06:50.706974 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.003813 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.564882 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_aaec8d0ffd277c0e93001246672220ba/startup-monitor/0.log" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.565729 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.709812 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock\") pod \"aaec8d0ffd277c0e93001246672220ba\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.709970 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock" (OuterVolumeSpecName: "var-lock") pod "aaec8d0ffd277c0e93001246672220ba" (UID: "aaec8d0ffd277c0e93001246672220ba"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.710685 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests" (OuterVolumeSpecName: "manifests") pod "aaec8d0ffd277c0e93001246672220ba" (UID: "aaec8d0ffd277c0e93001246672220ba"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.710841 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests\") pod \"aaec8d0ffd277c0e93001246672220ba\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.710974 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir\") pod \"aaec8d0ffd277c0e93001246672220ba\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log\") pod \"aaec8d0ffd277c0e93001246672220ba\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711290 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log" (OuterVolumeSpecName: "var-log") pod "aaec8d0ffd277c0e93001246672220ba" (UID: "aaec8d0ffd277c0e93001246672220ba"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711304 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir\") pod \"aaec8d0ffd277c0e93001246672220ba\" (UID: \"aaec8d0ffd277c0e93001246672220ba\") " Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711462 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "aaec8d0ffd277c0e93001246672220ba" (UID: "aaec8d0ffd277c0e93001246672220ba"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711802 4681 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-manifests\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711834 4681 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-log\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711852 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.711872 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.729659 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "aaec8d0ffd277c0e93001246672220ba" (UID: "aaec8d0ffd277c0e93001246672220ba"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.807502 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_aaec8d0ffd277c0e93001246672220ba/startup-monitor/0.log" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.807577 4681 generic.go:334] "Generic (PLEG): container finished" podID="aaec8d0ffd277c0e93001246672220ba" containerID="236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e" exitCode=137 Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.807626 4681 scope.go:117] "RemoveContainer" containerID="236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.807758 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.813655 4681 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/aaec8d0ffd277c0e93001246672220ba-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.832126 4681 scope.go:117] "RemoveContainer" containerID="236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e" Apr 04 02:06:51 crc kubenswrapper[4681]: E0404 02:06:51.832697 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e\": container with ID starting with 236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e not found: ID does not exist" containerID="236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e" Apr 04 02:06:51 crc kubenswrapper[4681]: I0404 02:06:51.832738 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e"} err="failed to get container status \"236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e\": rpc error: code = NotFound desc = could not find container \"236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e\": container with ID starting with 236e3932c70c15213281c4096f884684cfef0796c1d4c7fd41f46bfd2b0eab0e not found: ID does not exist" Apr 04 02:06:52 crc kubenswrapper[4681]: I0404 02:06:52.247434 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.213786 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaec8d0ffd277c0e93001246672220ba" 
path="/var/lib/kubelet/pods/aaec8d0ffd277c0e93001246672220ba/volumes" Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.214816 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.230464 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.230516 4681 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="73c6c152-3f5c-45e8-8f53-32ef4469365c" Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.237552 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Apr 04 02:06:53 crc kubenswrapper[4681]: I0404 02:06:53.237591 4681 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="73c6c152-3f5c-45e8-8f53-32ef4469365c" Apr 04 02:06:57 crc kubenswrapper[4681]: I0404 02:06:57.231843 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:06:57 crc kubenswrapper[4681]: I0404 02:06:57.238697 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.623649 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hdr5h"] Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624297 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaec8d0ffd277c0e93001246672220ba" containerName="startup-monitor" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624309 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="aaec8d0ffd277c0e93001246672220ba" containerName="startup-monitor" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624317 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab7e3566-522f-4eed-add8-d690d057fb83" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624324 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab7e3566-522f-4eed-add8-d690d057fb83" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624334 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624340 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624349 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624357 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624369 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624375 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624383 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624388 4681 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624396 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624402 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624430 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624435 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624443 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624448 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624458 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624464 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624472 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624478 4681 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624485 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624490 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="extract-utilities" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624496 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624502 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624510 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624516 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624525 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" containerName="registry" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624531 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" containerName="registry" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624539 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624545 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="extract-content" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624554 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624559 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: E0404 02:07:11.624567 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624572 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624652 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="da41f745-08e9-4d36-ad1d-3b054a4f0a2f" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624662 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab7e3566-522f-4eed-add8-d690d057fb83" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624670 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f00114dc-2aae-4d37-8143-71336f144be3" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624680 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3e95cc-25d6-4efd-8828-894657c29bcb" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624688 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624695 4681 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aaec8d0ffd277c0e93001246672220ba" containerName="startup-monitor" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624704 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="434092c3-92f3-4a1f-833a-872828fdd96e" containerName="registry" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624711 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="27afc22e-f982-4252-a9d3-a07bdd837b1c" containerName="installer" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624717 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="664aa862-1bb6-421a-87b9-992ead56694b" containerName="marketplace-operator" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.624723 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0cbd40c-5c8c-451b-af65-fb67ba867ced" containerName="registry-server" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.625147 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.631087 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.631375 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.632336 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.632491 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.648944 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.649748 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hdr5h"] Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.719791 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587806-vm2z8"] Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.720788 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.722078 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.722609 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.723739 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.736543 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587806-vm2z8"] Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.780725 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.780792 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdbbg\" (UniqueName: \"kubernetes.io/projected/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-kube-api-access-zdbbg\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.780816 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-operator-metrics\") pod 
\"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.881515 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdbbg\" (UniqueName: \"kubernetes.io/projected/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-kube-api-access-zdbbg\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.881583 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.881623 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps4mt\" (UniqueName: \"kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt\") pod \"auto-csr-approver-29587806-vm2z8\" (UID: \"1706eb60-e09b-43cd-8c84-6b617ee0deb3\") " pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.881654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.882642 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.892957 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.896934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdbbg\" (UniqueName: \"kubernetes.io/projected/76a1fdd0-d5af-45fe-8f41-bed5f036a8e1-kube-api-access-zdbbg\") pod \"marketplace-operator-79b997595-hdr5h\" (UID: \"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1\") " pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.943887 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:11 crc kubenswrapper[4681]: I0404 02:07:11.983920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps4mt\" (UniqueName: \"kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt\") pod \"auto-csr-approver-29587806-vm2z8\" (UID: \"1706eb60-e09b-43cd-8c84-6b617ee0deb3\") " pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.025951 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps4mt\" (UniqueName: \"kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt\") pod \"auto-csr-approver-29587806-vm2z8\" (UID: \"1706eb60-e09b-43cd-8c84-6b617ee0deb3\") " pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.034890 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.189467 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hdr5h"] Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.269212 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587806-vm2z8"] Apr 04 02:07:12 crc kubenswrapper[4681]: W0404 02:07:12.288333 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1706eb60_e09b_43cd_8c84_6b617ee0deb3.slice/crio-7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2 WatchSource:0}: Error finding container 7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2: Status 404 returned error can't find the container with id 7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2 Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.963631 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" event={"ID":"1706eb60-e09b-43cd-8c84-6b617ee0deb3","Type":"ContainerStarted","Data":"7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2"} Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.965563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" event={"ID":"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1","Type":"ContainerStarted","Data":"a65fc598901843dd49a0ddc992ef8eec87e0a145b0b2c835144926bf5a6a4113"} Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.965617 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" event={"ID":"76a1fdd0-d5af-45fe-8f41-bed5f036a8e1","Type":"ContainerStarted","Data":"94b9ff613de3968c7d4f3ca20c217b786c431f9277508d3b822e39d69395b07b"} Apr 04 02:07:12 crc 
kubenswrapper[4681]: I0404 02:07:12.965837 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.969296 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" Apr 04 02:07:12 crc kubenswrapper[4681]: I0404 02:07:12.986532 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hdr5h" podStartSLOduration=1.9865099659999998 podStartE2EDuration="1.986509966s" podCreationTimestamp="2026-04-04 02:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:07:12.981301943 +0000 UTC m=+712.647077073" watchObservedRunningTime="2026-04-04 02:07:12.986509966 +0000 UTC m=+712.652285086" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.476226 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.476724 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerName="controller-manager" containerID="cri-o://92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480" gracePeriod=30 Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.571970 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.572183 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" 
podUID="b9f92dd1-63f1-471c-b923-5cbf185137ca" containerName="route-controller-manager" containerID="cri-o://078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2" gracePeriod=30 Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.868532 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.893657 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.973341 4681 generic.go:334] "Generic (PLEG): container finished" podID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerID="92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480" exitCode=0 Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.973401 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.973431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" event={"ID":"16fd856c-870d-4c0b-986c-844ca3a36bbc","Type":"ContainerDied","Data":"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480"} Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.973866 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" event={"ID":"16fd856c-870d-4c0b-986c-844ca3a36bbc","Type":"ContainerDied","Data":"a1eb0580bd090a9ea797f43da377a3a5b74fc0c5e81a6ade6294d4c09cd0517e"} Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.973889 4681 scope.go:117] "RemoveContainer" containerID="92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.976372 4681 generic.go:334] "Generic (PLEG): container finished" podID="1706eb60-e09b-43cd-8c84-6b617ee0deb3" containerID="14a6c65a0aca260e68eaa2d4a3d2418b9423b654ad7577f011fe33205a4e79bc" exitCode=0 Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.976516 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" event={"ID":"1706eb60-e09b-43cd-8c84-6b617ee0deb3","Type":"ContainerDied","Data":"14a6c65a0aca260e68eaa2d4a3d2418b9423b654ad7577f011fe33205a4e79bc"} Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.978544 4681 generic.go:334] "Generic (PLEG): container finished" podID="b9f92dd1-63f1-471c-b923-5cbf185137ca" containerID="078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2" exitCode=0 Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.979189 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.979334 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" event={"ID":"b9f92dd1-63f1-471c-b923-5cbf185137ca","Type":"ContainerDied","Data":"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2"} Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.979356 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h" event={"ID":"b9f92dd1-63f1-471c-b923-5cbf185137ca","Type":"ContainerDied","Data":"c660176f73c992524f61e47457a071e1975accf69b2f3c1a725232eca416456e"} Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.994033 4681 scope.go:117] "RemoveContainer" containerID="92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480" Apr 04 02:07:13 crc kubenswrapper[4681]: E0404 02:07:13.994543 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480\": container with ID starting with 92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480 not found: ID does not exist" containerID="92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.994590 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480"} err="failed to get container status \"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480\": rpc error: code = NotFound desc = could not find container \"92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480\": container with ID starting with 
92090438ca0bddd76f7007c0371f194fcacd31d163e6440a4f5468e796849480 not found: ID does not exist" Apr 04 02:07:13 crc kubenswrapper[4681]: I0404 02:07:13.994625 4681 scope.go:117] "RemoveContainer" containerID="078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009343 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert\") pod \"16fd856c-870d-4c0b-986c-844ca3a36bbc\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009427 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca\") pod \"b9f92dd1-63f1-471c-b923-5cbf185137ca\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009469 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config\") pod \"b9f92dd1-63f1-471c-b923-5cbf185137ca\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009516 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts6mk\" (UniqueName: \"kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk\") pod \"16fd856c-870d-4c0b-986c-844ca3a36bbc\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009586 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert\") pod \"b9f92dd1-63f1-471c-b923-5cbf185137ca\" (UID: 
\"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009615 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca\") pod \"16fd856c-870d-4c0b-986c-844ca3a36bbc\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009656 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9f5z\" (UniqueName: \"kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z\") pod \"b9f92dd1-63f1-471c-b923-5cbf185137ca\" (UID: \"b9f92dd1-63f1-471c-b923-5cbf185137ca\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009696 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles\") pod \"16fd856c-870d-4c0b-986c-844ca3a36bbc\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.009724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config\") pod \"16fd856c-870d-4c0b-986c-844ca3a36bbc\" (UID: \"16fd856c-870d-4c0b-986c-844ca3a36bbc\") " Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.010380 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9f92dd1-63f1-471c-b923-5cbf185137ca" (UID: "b9f92dd1-63f1-471c-b923-5cbf185137ca"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.010373 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca" (OuterVolumeSpecName: "client-ca") pod "16fd856c-870d-4c0b-986c-844ca3a36bbc" (UID: "16fd856c-870d-4c0b-986c-844ca3a36bbc"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.010447 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config" (OuterVolumeSpecName: "config") pod "b9f92dd1-63f1-471c-b923-5cbf185137ca" (UID: "b9f92dd1-63f1-471c-b923-5cbf185137ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.010458 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "16fd856c-870d-4c0b-986c-844ca3a36bbc" (UID: "16fd856c-870d-4c0b-986c-844ca3a36bbc"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.010507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config" (OuterVolumeSpecName: "config") pod "16fd856c-870d-4c0b-986c-844ca3a36bbc" (UID: "16fd856c-870d-4c0b-986c-844ca3a36bbc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.011602 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.011622 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9f92dd1-63f1-471c-b923-5cbf185137ca-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.011633 4681 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-client-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.011643 4681 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.011654 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16fd856c-870d-4c0b-986c-844ca3a36bbc-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.012100 4681 scope.go:117] "RemoveContainer" containerID="078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2" Apr 04 02:07:14 crc kubenswrapper[4681]: E0404 02:07:14.012803 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2\": container with ID starting with 078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2 not found: ID does not exist" containerID="078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2" Apr 04 
02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.012842 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2"} err="failed to get container status \"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2\": rpc error: code = NotFound desc = could not find container \"078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2\": container with ID starting with 078ebec1a4f59189cd33439b2ffcec218bbee0fdfca824943f42c1f481b0ead2 not found: ID does not exist" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.014739 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9f92dd1-63f1-471c-b923-5cbf185137ca" (UID: "b9f92dd1-63f1-471c-b923-5cbf185137ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.014869 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z" (OuterVolumeSpecName: "kube-api-access-l9f5z") pod "b9f92dd1-63f1-471c-b923-5cbf185137ca" (UID: "b9f92dd1-63f1-471c-b923-5cbf185137ca"). InnerVolumeSpecName "kube-api-access-l9f5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.015424 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk" (OuterVolumeSpecName: "kube-api-access-ts6mk") pod "16fd856c-870d-4c0b-986c-844ca3a36bbc" (UID: "16fd856c-870d-4c0b-986c-844ca3a36bbc"). InnerVolumeSpecName "kube-api-access-ts6mk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.018321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16fd856c-870d-4c0b-986c-844ca3a36bbc" (UID: "16fd856c-870d-4c0b-986c-844ca3a36bbc"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.112675 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts6mk\" (UniqueName: \"kubernetes.io/projected/16fd856c-870d-4c0b-986c-844ca3a36bbc-kube-api-access-ts6mk\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.112709 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9f92dd1-63f1-471c-b923-5cbf185137ca-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.112721 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9f5z\" (UniqueName: \"kubernetes.io/projected/b9f92dd1-63f1-471c-b923-5cbf185137ca-kube-api-access-l9f5z\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.112729 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16fd856c-870d-4c0b-986c-844ca3a36bbc-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.299618 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.303384 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-57f89ccc88-6hpw6"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 
02:07:14.311683 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.316190 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6849fdf946-8tp9h"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.777996 4681 patch_prober.go:28] interesting pod/controller-manager-57f89ccc88-6hpw6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.778334 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-57f89ccc88-6hpw6" podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.808838 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5"] Apr 04 02:07:14 crc kubenswrapper[4681]: E0404 02:07:14.809050 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9f92dd1-63f1-471c-b923-5cbf185137ca" containerName="route-controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.809062 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9f92dd1-63f1-471c-b923-5cbf185137ca" containerName="route-controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: E0404 02:07:14.809080 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerName="controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.809087 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerName="controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.809170 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9f92dd1-63f1-471c-b923-5cbf185137ca" containerName="route-controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.809184 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" containerName="controller-manager" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.809540 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.811020 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.811453 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.812607 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.813198 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.813410 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.814974 4681 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.819024 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89644a4e-eb0d-4ab4-b8a5-d96df8852998-serving-cert\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.819091 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-client-ca\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.819169 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-config\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.819207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84654\" (UniqueName: \"kubernetes.io/projected/89644a4e-eb0d-4ab4-b8a5-d96df8852998-kube-api-access-84654\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.824659 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.864887 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9c68697c-5frnq"] Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.865748 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.867896 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.868840 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.868839 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.869335 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.869512 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.870119 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.874623 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.877603 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c68697c-5frnq"] Apr 04 02:07:14 crc 
kubenswrapper[4681]: I0404 02:07:14.919896 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-config\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.919960 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84654\" (UniqueName: \"kubernetes.io/projected/89644a4e-eb0d-4ab4-b8a5-d96df8852998-kube-api-access-84654\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.920018 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89644a4e-eb0d-4ab4-b8a5-d96df8852998-serving-cert\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.920050 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-client-ca\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.921297 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-client-ca\") pod 
\"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.922220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89644a4e-eb0d-4ab4-b8a5-d96df8852998-config\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.926622 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89644a4e-eb0d-4ab4-b8a5-d96df8852998-serving-cert\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:14 crc kubenswrapper[4681]: I0404 02:07:14.941579 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84654\" (UniqueName: \"kubernetes.io/projected/89644a4e-eb0d-4ab4-b8a5-d96df8852998-kube-api-access-84654\") pod \"route-controller-manager-59dc7b9585-9f7m5\" (UID: \"89644a4e-eb0d-4ab4-b8a5-d96df8852998\") " pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.022577 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-config\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.022678 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2ca98df-d6af-4e15-be1c-4e2316c4e507-serving-cert\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.022707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-client-ca\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.022731 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzdqc\" (UniqueName: \"kubernetes.io/projected/a2ca98df-d6af-4e15-be1c-4e2316c4e507-kube-api-access-zzdqc\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.022764 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-proxy-ca-bundles\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.124482 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-config\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " 
pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.124606 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2ca98df-d6af-4e15-be1c-4e2316c4e507-serving-cert\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.124637 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-client-ca\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.124669 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzdqc\" (UniqueName: \"kubernetes.io/projected/a2ca98df-d6af-4e15-be1c-4e2316c4e507-kube-api-access-zzdqc\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.124732 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-proxy-ca-bundles\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.126714 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-proxy-ca-bundles\") pod 
\"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.127499 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-config\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.127784 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a2ca98df-d6af-4e15-be1c-4e2316c4e507-client-ca\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.130975 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a2ca98df-d6af-4e15-be1c-4e2316c4e507-serving-cert\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.140701 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzdqc\" (UniqueName: \"kubernetes.io/projected/a2ca98df-d6af-4e15-be1c-4e2316c4e507-kube-api-access-zzdqc\") pod \"controller-manager-9c68697c-5frnq\" (UID: \"a2ca98df-d6af-4e15-be1c-4e2316c4e507\") " pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.206942 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16fd856c-870d-4c0b-986c-844ca3a36bbc" 
path="/var/lib/kubelet/pods/16fd856c-870d-4c0b-986c-844ca3a36bbc/volumes" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.207770 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9f92dd1-63f1-471c-b923-5cbf185137ca" path="/var/lib/kubelet/pods/b9f92dd1-63f1-471c-b923-5cbf185137ca/volumes" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.498598 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.510204 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.512890 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.631782 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps4mt\" (UniqueName: \"kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt\") pod \"1706eb60-e09b-43cd-8c84-6b617ee0deb3\" (UID: \"1706eb60-e09b-43cd-8c84-6b617ee0deb3\") " Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.640468 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt" (OuterVolumeSpecName: "kube-api-access-ps4mt") pod "1706eb60-e09b-43cd-8c84-6b617ee0deb3" (UID: "1706eb60-e09b-43cd-8c84-6b617ee0deb3"). InnerVolumeSpecName "kube-api-access-ps4mt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.686854 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5"] Apr 04 02:07:15 crc kubenswrapper[4681]: W0404 02:07:15.694191 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89644a4e_eb0d_4ab4_b8a5_d96df8852998.slice/crio-1cb4f13886a8b73ec40e5c4b2adebdf1fd449577bbd14b4404de842ce49b4c71 WatchSource:0}: Error finding container 1cb4f13886a8b73ec40e5c4b2adebdf1fd449577bbd14b4404de842ce49b4c71: Status 404 returned error can't find the container with id 1cb4f13886a8b73ec40e5c4b2adebdf1fd449577bbd14b4404de842ce49b4c71 Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.721851 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9c68697c-5frnq"] Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.734079 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps4mt\" (UniqueName: \"kubernetes.io/projected/1706eb60-e09b-43cd-8c84-6b617ee0deb3-kube-api-access-ps4mt\") on node \"crc\" DevicePath \"\"" Apr 04 02:07:15 crc kubenswrapper[4681]: W0404 02:07:15.737224 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2ca98df_d6af_4e15_be1c_4e2316c4e507.slice/crio-ed5267346c9f778e36573ed4f63682b18ff48f0d8b3316e949ad9422f1c6e779 WatchSource:0}: Error finding container ed5267346c9f778e36573ed4f63682b18ff48f0d8b3316e949ad9422f1c6e779: Status 404 returned error can't find the container with id ed5267346c9f778e36573ed4f63682b18ff48f0d8b3316e949ad9422f1c6e779 Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.994799 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" 
event={"ID":"a2ca98df-d6af-4e15-be1c-4e2316c4e507","Type":"ContainerStarted","Data":"535aca10ce96b86444c545873336554390a92141c042cc7a9693077c9a5ce515"} Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.995353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" event={"ID":"a2ca98df-d6af-4e15-be1c-4e2316c4e507","Type":"ContainerStarted","Data":"ed5267346c9f778e36573ed4f63682b18ff48f0d8b3316e949ad9422f1c6e779"} Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.997648 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.999617 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" event={"ID":"89644a4e-eb0d-4ab4-b8a5-d96df8852998","Type":"ContainerStarted","Data":"e6772d95409c6ef7d859e569f915207335d1e128ad7379b4c7a1eaadc7758f82"} Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.999697 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" event={"ID":"89644a4e-eb0d-4ab4-b8a5-d96df8852998","Type":"ContainerStarted","Data":"1cb4f13886a8b73ec40e5c4b2adebdf1fd449577bbd14b4404de842ce49b4c71"} Apr 04 02:07:15 crc kubenswrapper[4681]: I0404 02:07:15.999743 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.003091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.006991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" 
event={"ID":"1706eb60-e09b-43cd-8c84-6b617ee0deb3","Type":"ContainerDied","Data":"7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2"} Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.007044 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7be294b5cfe9f2269f57461655e768b8e2a81f47e33b1c87471809fdc4d27ea2" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.007109 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587806-vm2z8" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.017214 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9c68697c-5frnq" podStartSLOduration=3.017191979 podStartE2EDuration="3.017191979s" podCreationTimestamp="2026-04-04 02:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:07:16.012989614 +0000 UTC m=+715.678764754" watchObservedRunningTime="2026-04-04 02:07:16.017191979 +0000 UTC m=+715.682967109" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.056681 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" podStartSLOduration=2.056661466 podStartE2EDuration="2.056661466s" podCreationTimestamp="2026-04-04 02:07:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:07:16.053765017 +0000 UTC m=+715.719540147" watchObservedRunningTime="2026-04-04 02:07:16.056661466 +0000 UTC m=+715.722436586" Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.382112 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59dc7b9585-9f7m5" Apr 04 
02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.561254 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587800-xd7wt"] Apr 04 02:07:16 crc kubenswrapper[4681]: I0404 02:07:16.566668 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587800-xd7wt"] Apr 04 02:07:17 crc kubenswrapper[4681]: I0404 02:07:17.210731 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eff11bc-ec44-4492-90f2-c24f4b0438bc" path="/var/lib/kubelet/pods/9eff11bc-ec44-4492-90f2-c24f4b0438bc/volumes" Apr 04 02:07:26 crc kubenswrapper[4681]: I0404 02:07:26.524384 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:07:26 crc kubenswrapper[4681]: I0404 02:07:26.525084 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.020153 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-8-retry-2-crc"] Apr 04 02:07:48 crc kubenswrapper[4681]: E0404 02:07:48.021946 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1706eb60-e09b-43cd-8c84-6b617ee0deb3" containerName="oc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.022017 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1706eb60-e09b-43cd-8c84-6b617ee0deb3" containerName="oc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.022309 4681 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1706eb60-e09b-43cd-8c84-6b617ee0deb3" containerName="oc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.023208 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.026816 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-5vhrm" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.026887 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.040154 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-retry-2-crc"] Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.051242 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.051295 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.051346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " 
pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.152130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.152461 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.152488 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.152552 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.152691 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc 
kubenswrapper[4681]: I0404 02:07:48.170078 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access\") pod \"installer-8-retry-2-crc\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.387365 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:07:48 crc kubenswrapper[4681]: I0404 02:07:48.806406 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-8-retry-2-crc"] Apr 04 02:07:49 crc kubenswrapper[4681]: I0404 02:07:49.217959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-2-crc" event={"ID":"5efcf7ec-256b-4faa-98da-c9d737c2287b","Type":"ContainerStarted","Data":"9b3fa424c7d04887a47e6e06f382251d81511976da4b92deb0dace93652b737f"} Apr 04 02:07:49 crc kubenswrapper[4681]: I0404 02:07:49.220050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-2-crc" event={"ID":"5efcf7ec-256b-4faa-98da-c9d737c2287b","Type":"ContainerStarted","Data":"e5ec69697e7258dc4d278df3ee2ad7c9d40fa5e2b390df99fe6dc981c889ef82"} Apr 04 02:07:49 crc kubenswrapper[4681]: I0404 02:07:49.236384 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-8-retry-2-crc" podStartSLOduration=1.236361796 podStartE2EDuration="1.236361796s" podCreationTimestamp="2026-04-04 02:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:07:49.231197065 +0000 UTC m=+748.896972205" watchObservedRunningTime="2026-04-04 02:07:49.236361796 +0000 UTC m=+748.902136936" Apr 04 02:07:56 crc 
kubenswrapper[4681]: I0404 02:07:56.524153 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:07:56 crc kubenswrapper[4681]: I0404 02:07:56.524823 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.370120 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcg4"] Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.371690 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.373537 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.381145 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcg4"] Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.410445 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfm8s\" (UniqueName: \"kubernetes.io/projected/2ed9fccc-c563-4589-8289-6293e52869e4-kube-api-access-gfm8s\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.410569 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-catalog-content\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.410612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-utilities\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.511354 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-catalog-content\") pod \"redhat-marketplace-rlcg4\" (UID: 
\"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.511411 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-utilities\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.511481 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfm8s\" (UniqueName: \"kubernetes.io/projected/2ed9fccc-c563-4589-8289-6293e52869e4-kube-api-access-gfm8s\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.512050 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-utilities\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.512221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2ed9fccc-c563-4589-8289-6293e52869e4-catalog-content\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.547600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfm8s\" (UniqueName: \"kubernetes.io/projected/2ed9fccc-c563-4589-8289-6293e52869e4-kube-api-access-gfm8s\") pod \"redhat-marketplace-rlcg4\" (UID: \"2ed9fccc-c563-4589-8289-6293e52869e4\") " 
pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.579364 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvbw2"] Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.584429 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.588028 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.590777 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvbw2"] Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.612839 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc7k4\" (UniqueName: \"kubernetes.io/projected/07348284-8b90-475b-92e4-f92c9a4ec127-kube-api-access-fc7k4\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.612892 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-catalog-content\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.612941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-utilities\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" 
Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.688115 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.714508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc7k4\" (UniqueName: \"kubernetes.io/projected/07348284-8b90-475b-92e4-f92c9a4ec127-kube-api-access-fc7k4\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.714679 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-catalog-content\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.714734 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-utilities\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.715209 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-utilities\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.715326 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07348284-8b90-475b-92e4-f92c9a4ec127-catalog-content\") pod 
\"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.734696 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc7k4\" (UniqueName: \"kubernetes.io/projected/07348284-8b90-475b-92e4-f92c9a4ec127-kube-api-access-fc7k4\") pod \"redhat-operators-vvbw2\" (UID: \"07348284-8b90-475b-92e4-f92c9a4ec127\") " pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:07:59 crc kubenswrapper[4681]: I0404 02:07:59.914131 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.096690 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rlcg4"] Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.118208 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvbw2"] Apr 04 02:08:00 crc kubenswrapper[4681]: W0404 02:08:00.133153 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07348284_8b90_475b_92e4_f92c9a4ec127.slice/crio-dd95564c48318896e31c6b9c7d06054a1ba63afd530920e0970f15c7f79a297c WatchSource:0}: Error finding container dd95564c48318896e31c6b9c7d06054a1ba63afd530920e0970f15c7f79a297c: Status 404 returned error can't find the container with id dd95564c48318896e31c6b9c7d06054a1ba63afd530920e0970f15c7f79a297c Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.142058 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587808-zjwxg"] Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.143502 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.148111 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.148425 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.149829 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587808-zjwxg"] Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.150690 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.225752 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpg7c\" (UniqueName: \"kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c\") pod \"auto-csr-approver-29587808-zjwxg\" (UID: \"6587f6a2-6fa3-4c4c-9e71-2f6f48027630\") " pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.306057 4681 generic.go:334] "Generic (PLEG): container finished" podID="2ed9fccc-c563-4589-8289-6293e52869e4" containerID="19cc03ce97285c99d7d5b01a4ed76902d714d5336ac4d09aabaa682660bf698c" exitCode=0 Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.306133 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcg4" event={"ID":"2ed9fccc-c563-4589-8289-6293e52869e4","Type":"ContainerDied","Data":"19cc03ce97285c99d7d5b01a4ed76902d714d5336ac4d09aabaa682660bf698c"} Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.306158 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcg4" 
event={"ID":"2ed9fccc-c563-4589-8289-6293e52869e4","Type":"ContainerStarted","Data":"09f92e4bafd34a23ea111a06b6ab9d8d1595ab49e88e68a612b56dfc374305c6"} Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.307436 4681 generic.go:334] "Generic (PLEG): container finished" podID="07348284-8b90-475b-92e4-f92c9a4ec127" containerID="ccca854a3b8b434e8363eb482148f2b720d0c16f10c569cec78bf86f1729154a" exitCode=0 Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.307465 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvbw2" event={"ID":"07348284-8b90-475b-92e4-f92c9a4ec127","Type":"ContainerDied","Data":"ccca854a3b8b434e8363eb482148f2b720d0c16f10c569cec78bf86f1729154a"} Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.307484 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvbw2" event={"ID":"07348284-8b90-475b-92e4-f92c9a4ec127","Type":"ContainerStarted","Data":"dd95564c48318896e31c6b9c7d06054a1ba63afd530920e0970f15c7f79a297c"} Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.326954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpg7c\" (UniqueName: \"kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c\") pod \"auto-csr-approver-29587808-zjwxg\" (UID: \"6587f6a2-6fa3-4c4c-9e71-2f6f48027630\") " pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.352914 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpg7c\" (UniqueName: \"kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c\") pod \"auto-csr-approver-29587808-zjwxg\" (UID: \"6587f6a2-6fa3-4c4c-9e71-2f6f48027630\") " pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.476273 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:00 crc kubenswrapper[4681]: I0404 02:08:00.962880 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587808-zjwxg"] Apr 04 02:08:00 crc kubenswrapper[4681]: W0404 02:08:00.971116 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6587f6a2_6fa3_4c4c_9e71_2f6f48027630.slice/crio-c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931 WatchSource:0}: Error finding container c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931: Status 404 returned error can't find the container with id c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931 Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.159431 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s7ppd"] Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.160936 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.164473 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.176098 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ppd"] Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.240640 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-955w9\" (UniqueName: \"kubernetes.io/projected/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-kube-api-access-955w9\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.240831 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-catalog-content\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.240898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-utilities\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.313677 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" 
event={"ID":"6587f6a2-6fa3-4c4c-9e71-2f6f48027630","Type":"ContainerStarted","Data":"c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931"} Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.341686 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-catalog-content\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.341740 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-utilities\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.341792 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-955w9\" (UniqueName: \"kubernetes.io/projected/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-kube-api-access-955w9\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.342775 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-utilities\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.342853 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-catalog-content\") pod 
\"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.366543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-955w9\" (UniqueName: \"kubernetes.io/projected/bb4a8188-2d15-4ecb-8b44-46d49acb6dd8-kube-api-access-955w9\") pod \"certified-operators-s7ppd\" (UID: \"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8\") " pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.497504 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:01 crc kubenswrapper[4681]: I0404 02:08:01.706415 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7ppd"] Apr 04 02:08:01 crc kubenswrapper[4681]: W0404 02:08:01.724125 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb4a8188_2d15_4ecb_8b44_46d49acb6dd8.slice/crio-05352b4f0bdbbdb259a995b24dd03d23ae661da34c1b037ce80eeeb35a816524 WatchSource:0}: Error finding container 05352b4f0bdbbdb259a995b24dd03d23ae661da34c1b037ce80eeeb35a816524: Status 404 returned error can't find the container with id 05352b4f0bdbbdb259a995b24dd03d23ae661da34c1b037ce80eeeb35a816524 Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.161532 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pc5lk"] Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.163299 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.167844 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.178494 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc5lk"] Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.253778 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-utilities\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.253828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-catalog-content\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.253846 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfbg\" (UniqueName: \"kubernetes.io/projected/f5161cee-bf19-458a-95a3-8cf593f8f78c-kube-api-access-gcfbg\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.320187 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" 
event={"ID":"6587f6a2-6fa3-4c4c-9e71-2f6f48027630","Type":"ContainerStarted","Data":"d023473e56730b16c4e3656681287626b938bcaea2055858c4e97a52f3d03cd3"} Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.321962 4681 generic.go:334] "Generic (PLEG): container finished" podID="2ed9fccc-c563-4589-8289-6293e52869e4" containerID="24994f5c193eec21465b1e3e54762c6b4eeca122da15cfc7bb3f7ad762193b7b" exitCode=0 Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.322015 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcg4" event={"ID":"2ed9fccc-c563-4589-8289-6293e52869e4","Type":"ContainerDied","Data":"24994f5c193eec21465b1e3e54762c6b4eeca122da15cfc7bb3f7ad762193b7b"} Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.323947 4681 generic.go:334] "Generic (PLEG): container finished" podID="bb4a8188-2d15-4ecb-8b44-46d49acb6dd8" containerID="8841fb6dddb7bde5bbb0240a23ad2f9742a2ee048dd2ed7e167cc66d0db9f0db" exitCode=0 Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.324020 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ppd" event={"ID":"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8","Type":"ContainerDied","Data":"8841fb6dddb7bde5bbb0240a23ad2f9742a2ee048dd2ed7e167cc66d0db9f0db"} Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.324046 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ppd" event={"ID":"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8","Type":"ContainerStarted","Data":"05352b4f0bdbbdb259a995b24dd03d23ae661da34c1b037ce80eeeb35a816524"} Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.327795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvbw2" event={"ID":"07348284-8b90-475b-92e4-f92c9a4ec127","Type":"ContainerDied","Data":"d137a4e6d64e95cec0849ce7f8fa93f9d1c7c220f3344d044102ba8cc6b78018"} Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 
02:08:02.327731 4681 generic.go:334] "Generic (PLEG): container finished" podID="07348284-8b90-475b-92e4-f92c9a4ec127" containerID="d137a4e6d64e95cec0849ce7f8fa93f9d1c7c220f3344d044102ba8cc6b78018" exitCode=0 Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.341020 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" podStartSLOduration=1.4063639430000001 podStartE2EDuration="2.341004468s" podCreationTimestamp="2026-04-04 02:08:00 +0000 UTC" firstStartedPulling="2026-04-04 02:08:00.97315532 +0000 UTC m=+760.638930440" lastFinishedPulling="2026-04-04 02:08:01.907795845 +0000 UTC m=+761.573570965" observedRunningTime="2026-04-04 02:08:02.337728609 +0000 UTC m=+762.003503729" watchObservedRunningTime="2026-04-04 02:08:02.341004468 +0000 UTC m=+762.006779588" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.356348 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-utilities\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.356412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-catalog-content\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.356439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcfbg\" (UniqueName: \"kubernetes.io/projected/f5161cee-bf19-458a-95a3-8cf593f8f78c-kube-api-access-gcfbg\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " 
pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.357518 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-catalog-content\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.357738 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5161cee-bf19-458a-95a3-8cf593f8f78c-utilities\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.379498 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcfbg\" (UniqueName: \"kubernetes.io/projected/f5161cee-bf19-458a-95a3-8cf593f8f78c-kube-api-access-gcfbg\") pod \"community-operators-pc5lk\" (UID: \"f5161cee-bf19-458a-95a3-8cf593f8f78c\") " pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.548861 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.717144 4681 scope.go:117] "RemoveContainer" containerID="e7c19c368ce0b475b003861af2dbc2fc182f3be264c38c46aae9c596492d72e1" Apr 04 02:08:02 crc kubenswrapper[4681]: I0404 02:08:02.730555 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pc5lk"] Apr 04 02:08:02 crc kubenswrapper[4681]: W0404 02:08:02.738839 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5161cee_bf19_458a_95a3_8cf593f8f78c.slice/crio-96b1ebc01085795e395d7bd0d0943ada2b7d676dcc5a3a769dc9e6f2f8e01a4d WatchSource:0}: Error finding container 96b1ebc01085795e395d7bd0d0943ada2b7d676dcc5a3a769dc9e6f2f8e01a4d: Status 404 returned error can't find the container with id 96b1ebc01085795e395d7bd0d0943ada2b7d676dcc5a3a769dc9e6f2f8e01a4d Apr 04 02:08:03 crc kubenswrapper[4681]: I0404 02:08:03.335626 4681 generic.go:334] "Generic (PLEG): container finished" podID="f5161cee-bf19-458a-95a3-8cf593f8f78c" containerID="8b8afcee64363a8aa55ead60e7c68e103d88cc0a8ef415dd6db67e9da562e909" exitCode=0 Apr 04 02:08:03 crc kubenswrapper[4681]: I0404 02:08:03.335815 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc5lk" event={"ID":"f5161cee-bf19-458a-95a3-8cf593f8f78c","Type":"ContainerDied","Data":"8b8afcee64363a8aa55ead60e7c68e103d88cc0a8ef415dd6db67e9da562e909"} Apr 04 02:08:03 crc kubenswrapper[4681]: I0404 02:08:03.335921 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc5lk" event={"ID":"f5161cee-bf19-458a-95a3-8cf593f8f78c","Type":"ContainerStarted","Data":"96b1ebc01085795e395d7bd0d0943ada2b7d676dcc5a3a769dc9e6f2f8e01a4d"} Apr 04 02:08:03 crc kubenswrapper[4681]: I0404 02:08:03.337712 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="6587f6a2-6fa3-4c4c-9e71-2f6f48027630" containerID="d023473e56730b16c4e3656681287626b938bcaea2055858c4e97a52f3d03cd3" exitCode=0 Apr 04 02:08:03 crc kubenswrapper[4681]: I0404 02:08:03.337819 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" event={"ID":"6587f6a2-6fa3-4c4c-9e71-2f6f48027630","Type":"ContainerDied","Data":"d023473e56730b16c4e3656681287626b938bcaea2055858c4e97a52f3d03cd3"} Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.345571 4681 generic.go:334] "Generic (PLEG): container finished" podID="bb4a8188-2d15-4ecb-8b44-46d49acb6dd8" containerID="43f24813985ab6517e72ef74bde98dd62d1d2c35e34021fd52fc104e8858d9ea" exitCode=0 Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.345666 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ppd" event={"ID":"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8","Type":"ContainerDied","Data":"43f24813985ab6517e72ef74bde98dd62d1d2c35e34021fd52fc104e8858d9ea"} Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.347916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvbw2" event={"ID":"07348284-8b90-475b-92e4-f92c9a4ec127","Type":"ContainerStarted","Data":"94243e6e07f6f551504b531f58d2b3e3939a1f05d60c18e14958264eb5ae0c67"} Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.350918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rlcg4" event={"ID":"2ed9fccc-c563-4589-8289-6293e52869e4","Type":"ContainerStarted","Data":"90ab1e99af83d020b812027a0bec78cec321ef134254c9b6ad1853648721a49a"} Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.385024 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvbw2" podStartSLOduration=2.249372616 podStartE2EDuration="5.385008857s" podCreationTimestamp="2026-04-04 02:07:59 +0000 UTC" 
firstStartedPulling="2026-04-04 02:08:00.308608884 +0000 UTC m=+759.974384004" lastFinishedPulling="2026-04-04 02:08:03.444245125 +0000 UTC m=+763.110020245" observedRunningTime="2026-04-04 02:08:04.381701697 +0000 UTC m=+764.047476817" watchObservedRunningTime="2026-04-04 02:08:04.385008857 +0000 UTC m=+764.050783977" Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.403151 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rlcg4" podStartSLOduration=2.473610956 podStartE2EDuration="5.403133592s" podCreationTimestamp="2026-04-04 02:07:59 +0000 UTC" firstStartedPulling="2026-04-04 02:08:00.307539905 +0000 UTC m=+759.973315045" lastFinishedPulling="2026-04-04 02:08:03.237062561 +0000 UTC m=+762.902837681" observedRunningTime="2026-04-04 02:08:04.398829234 +0000 UTC m=+764.064604354" watchObservedRunningTime="2026-04-04 02:08:04.403133592 +0000 UTC m=+764.068908712" Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.606706 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.786165 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpg7c\" (UniqueName: \"kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c\") pod \"6587f6a2-6fa3-4c4c-9e71-2f6f48027630\" (UID: \"6587f6a2-6fa3-4c4c-9e71-2f6f48027630\") " Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.792942 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c" (OuterVolumeSpecName: "kube-api-access-tpg7c") pod "6587f6a2-6fa3-4c4c-9e71-2f6f48027630" (UID: "6587f6a2-6fa3-4c4c-9e71-2f6f48027630"). InnerVolumeSpecName "kube-api-access-tpg7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:08:04 crc kubenswrapper[4681]: I0404 02:08:04.888175 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpg7c\" (UniqueName: \"kubernetes.io/projected/6587f6a2-6fa3-4c4c-9e71-2f6f48027630-kube-api-access-tpg7c\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.376083 4681 generic.go:334] "Generic (PLEG): container finished" podID="f5161cee-bf19-458a-95a3-8cf593f8f78c" containerID="0989534b8c361a45bd6db494244538e26e03658047eabace05009057ab3f1b8a" exitCode=0 Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.376167 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc5lk" event={"ID":"f5161cee-bf19-458a-95a3-8cf593f8f78c","Type":"ContainerDied","Data":"0989534b8c361a45bd6db494244538e26e03658047eabace05009057ab3f1b8a"} Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.386014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7ppd" event={"ID":"bb4a8188-2d15-4ecb-8b44-46d49acb6dd8","Type":"ContainerStarted","Data":"8dcd26269c9d08716dbdbb27de9ef4b8ecb798e85a907f8cfccf338375d9a2c9"} Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.391420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" event={"ID":"6587f6a2-6fa3-4c4c-9e71-2f6f48027630","Type":"ContainerDied","Data":"c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931"} Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.391453 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0cd95928c3a6ef921421bf75e428935c364d4566172705857fde96c001cc931" Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.391513 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587808-zjwxg" Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.404848 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587802-dwq82"] Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.409944 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587802-dwq82"] Apr 04 02:08:05 crc kubenswrapper[4681]: I0404 02:08:05.410188 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s7ppd" podStartSLOduration=1.771515985 podStartE2EDuration="4.410170433s" podCreationTimestamp="2026-04-04 02:08:01 +0000 UTC" firstStartedPulling="2026-04-04 02:08:02.325431152 +0000 UTC m=+761.991206272" lastFinishedPulling="2026-04-04 02:08:04.96408558 +0000 UTC m=+764.629860720" observedRunningTime="2026-04-04 02:08:05.40892293 +0000 UTC m=+765.074698040" watchObservedRunningTime="2026-04-04 02:08:05.410170433 +0000 UTC m=+765.075945563" Apr 04 02:08:06 crc kubenswrapper[4681]: I0404 02:08:06.401523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pc5lk" event={"ID":"f5161cee-bf19-458a-95a3-8cf593f8f78c","Type":"ContainerStarted","Data":"b95725979a024dc739c3941932503d7f486231a4214d7ffa9971f3e54aa1fc18"} Apr 04 02:08:06 crc kubenswrapper[4681]: I0404 02:08:06.419901 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pc5lk" podStartSLOduration=1.928279835 podStartE2EDuration="4.419882799s" podCreationTimestamp="2026-04-04 02:08:02 +0000 UTC" firstStartedPulling="2026-04-04 02:08:03.424219339 +0000 UTC m=+763.089994459" lastFinishedPulling="2026-04-04 02:08:05.915822303 +0000 UTC m=+765.581597423" observedRunningTime="2026-04-04 02:08:06.417321509 +0000 UTC m=+766.083096629" watchObservedRunningTime="2026-04-04 02:08:06.419882799 +0000 UTC 
m=+766.085657919" Apr 04 02:08:07 crc kubenswrapper[4681]: I0404 02:08:07.207027 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa5d5179-84c8-46fd-9328-9016b6b13714" path="/var/lib/kubelet/pods/fa5d5179-84c8-46fd-9328-9016b6b13714/volumes" Apr 04 02:08:09 crc kubenswrapper[4681]: I0404 02:08:09.688926 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:08:09 crc kubenswrapper[4681]: I0404 02:08:09.689345 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:08:09 crc kubenswrapper[4681]: I0404 02:08:09.743826 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:08:09 crc kubenswrapper[4681]: I0404 02:08:09.914690 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:08:09 crc kubenswrapper[4681]: I0404 02:08:09.914753 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:08:10 crc kubenswrapper[4681]: I0404 02:08:10.482396 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rlcg4" Apr 04 02:08:10 crc kubenswrapper[4681]: I0404 02:08:10.976153 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvbw2" podUID="07348284-8b90-475b-92e4-f92c9a4ec127" containerName="registry-server" probeResult="failure" output=< Apr 04 02:08:10 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:08:10 crc kubenswrapper[4681]: > Apr 04 02:08:11 crc kubenswrapper[4681]: I0404 02:08:11.498247 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:11 crc kubenswrapper[4681]: I0404 02:08:11.498667 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:11 crc kubenswrapper[4681]: I0404 02:08:11.575253 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:12 crc kubenswrapper[4681]: I0404 02:08:12.477971 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s7ppd" Apr 04 02:08:12 crc kubenswrapper[4681]: I0404 02:08:12.549879 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:12 crc kubenswrapper[4681]: I0404 02:08:12.549939 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:12 crc kubenswrapper[4681]: I0404 02:08:12.597756 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:13 crc kubenswrapper[4681]: I0404 02:08:13.507099 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pc5lk" Apr 04 02:08:19 crc kubenswrapper[4681]: I0404 02:08:19.992506 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.031587 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvbw2" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.137140 4681 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 
02:08:20.137688 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler" containerID="cri-o://012e418fce41527250e1c1400c7bf40551f1ac955e2eb3fd80ae37a130e0a833" gracePeriod=30 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.137755 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-recovery-controller" containerID="cri-o://22263690eb4d10133ad56476528d9c6f8145ae364a06c6223dd2feb0e10112f7" gracePeriod=30 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.137781 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-cert-syncer" containerID="cri-o://32fb2fe503d9f87d5ab9fe349e8bd659ee722bf28615eced1cd647933ba65151" gracePeriod=30 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140129 4681 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:08:20 crc kubenswrapper[4681]: E0404 02:08:20.140426 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6587f6a2-6fa3-4c4c-9e71-2f6f48027630" containerName="oc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140448 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6587f6a2-6fa3-4c4c-9e71-2f6f48027630" containerName="oc" Apr 04 02:08:20 crc kubenswrapper[4681]: E0404 02:08:20.140465 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-cert-syncer" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140473 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-cert-syncer" Apr 04 02:08:20 crc kubenswrapper[4681]: E0404 02:08:20.140486 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140495 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler" Apr 04 02:08:20 crc kubenswrapper[4681]: E0404 02:08:20.140511 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815516d0756bb9282f4d0a28cef72670" containerName="wait-for-host-port" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140519 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="815516d0756bb9282f4d0a28cef72670" containerName="wait-for-host-port" Apr 04 02:08:20 crc kubenswrapper[4681]: E0404 02:08:20.140535 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-recovery-controller" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140545 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-recovery-controller" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140661 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-recovery-controller" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140676 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler-cert-syncer" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140684 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6587f6a2-6fa3-4c4c-9e71-2f6f48027630" containerName="oc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.140694 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="815516d0756bb9282f4d0a28cef72670" containerName="kube-scheduler" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.230051 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.230139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.332129 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.332492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.332575 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-resource-dir\") pod \"openshift-kube-scheduler-crc\" 
(UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.332615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d8fd3797d07faa04d98c33c6c96ee09f-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"d8fd3797d07faa04d98c33c6c96ee09f\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.501346 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_815516d0756bb9282f4d0a28cef72670/kube-scheduler-cert-syncer/0.log" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.502378 4681 generic.go:334] "Generic (PLEG): container finished" podID="815516d0756bb9282f4d0a28cef72670" containerID="22263690eb4d10133ad56476528d9c6f8145ae364a06c6223dd2feb0e10112f7" exitCode=0 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.502416 4681 generic.go:334] "Generic (PLEG): container finished" podID="815516d0756bb9282f4d0a28cef72670" containerID="32fb2fe503d9f87d5ab9fe349e8bd659ee722bf28615eced1cd647933ba65151" exitCode=2 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.502430 4681 generic.go:334] "Generic (PLEG): container finished" podID="815516d0756bb9282f4d0a28cef72670" containerID="012e418fce41527250e1c1400c7bf40551f1ac955e2eb3fd80ae37a130e0a833" exitCode=0 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.503966 4681 generic.go:334] "Generic (PLEG): container finished" podID="5efcf7ec-256b-4faa-98da-c9d737c2287b" containerID="9b3fa424c7d04887a47e6e06f382251d81511976da4b92deb0dace93652b737f" exitCode=0 Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.504012 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-2-crc" 
event={"ID":"5efcf7ec-256b-4faa-98da-c9d737c2287b","Type":"ContainerDied","Data":"9b3fa424c7d04887a47e6e06f382251d81511976da4b92deb0dace93652b737f"} Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.508414 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="815516d0756bb9282f4d0a28cef72670" podUID="d8fd3797d07faa04d98c33c6c96ee09f" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.820816 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_815516d0756bb9282f4d0a28cef72670/kube-scheduler-cert-syncer/0.log" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.822983 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.826370 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="815516d0756bb9282f4d0a28cef72670" podUID="d8fd3797d07faa04d98c33c6c96ee09f" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943119 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir\") pod \"815516d0756bb9282f4d0a28cef72670\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943254 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "815516d0756bb9282f4d0a28cef72670" (UID: "815516d0756bb9282f4d0a28cef72670"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943338 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir\") pod \"815516d0756bb9282f4d0a28cef72670\" (UID: \"815516d0756bb9282f4d0a28cef72670\") " Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943414 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "815516d0756bb9282f4d0a28cef72670" (UID: "815516d0756bb9282f4d0a28cef72670"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943604 4681 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-resource-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:20 crc kubenswrapper[4681]: I0404 02:08:20.943617 4681 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/815516d0756bb9282f4d0a28cef72670-cert-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.220506 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815516d0756bb9282f4d0a28cef72670" path="/var/lib/kubelet/pods/815516d0756bb9282f4d0a28cef72670/volumes" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.513677 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-crc_815516d0756bb9282f4d0a28cef72670/kube-scheduler-cert-syncer/0.log" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.517209 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.517316 4681 scope.go:117] "RemoveContainer" containerID="22263690eb4d10133ad56476528d9c6f8145ae364a06c6223dd2feb0e10112f7" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.525752 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" oldPodUID="815516d0756bb9282f4d0a28cef72670" podUID="d8fd3797d07faa04d98c33c6c96ee09f" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.539149 4681 scope.go:117] "RemoveContainer" containerID="32fb2fe503d9f87d5ab9fe349e8bd659ee722bf28615eced1cd647933ba65151" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.560599 4681 scope.go:117] "RemoveContainer" containerID="012e418fce41527250e1c1400c7bf40551f1ac955e2eb3fd80ae37a130e0a833" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.597463 4681 scope.go:117] "RemoveContainer" containerID="246e89242f8191d98ded35cd67c72567f0754ffb51ac295a462acf01eaf8b85f" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.827495 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.858184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock\") pod \"5efcf7ec-256b-4faa-98da-c9d737c2287b\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.858539 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir\") pod \"5efcf7ec-256b-4faa-98da-c9d737c2287b\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.858403 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock" (OuterVolumeSpecName: "var-lock") pod "5efcf7ec-256b-4faa-98da-c9d737c2287b" (UID: "5efcf7ec-256b-4faa-98da-c9d737c2287b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.858614 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5efcf7ec-256b-4faa-98da-c9d737c2287b" (UID: "5efcf7ec-256b-4faa-98da-c9d737c2287b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.858824 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access\") pod \"5efcf7ec-256b-4faa-98da-c9d737c2287b\" (UID: \"5efcf7ec-256b-4faa-98da-c9d737c2287b\") " Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.859166 4681 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-var-lock\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.859254 4681 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5efcf7ec-256b-4faa-98da-c9d737c2287b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.871591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5efcf7ec-256b-4faa-98da-c9d737c2287b" (UID: "5efcf7ec-256b-4faa-98da-c9d737c2287b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:08:21 crc kubenswrapper[4681]: I0404 02:08:21.960678 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5efcf7ec-256b-4faa-98da-c9d737c2287b-kube-api-access\") on node \"crc\" DevicePath \"\"" Apr 04 02:08:22 crc kubenswrapper[4681]: I0404 02:08:22.524878 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-8-retry-2-crc" event={"ID":"5efcf7ec-256b-4faa-98da-c9d737c2287b","Type":"ContainerDied","Data":"e5ec69697e7258dc4d278df3ee2ad7c9d40fa5e2b390df99fe6dc981c889ef82"} Apr 04 02:08:22 crc kubenswrapper[4681]: I0404 02:08:22.524926 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-8-retry-2-crc" Apr 04 02:08:22 crc kubenswrapper[4681]: I0404 02:08:22.524935 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5ec69697e7258dc4d278df3ee2ad7c9d40fa5e2b390df99fe6dc981c889ef82" Apr 04 02:08:26 crc kubenswrapper[4681]: I0404 02:08:26.524246 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:08:26 crc kubenswrapper[4681]: I0404 02:08:26.524913 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:08:26 crc kubenswrapper[4681]: I0404 02:08:26.524976 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:08:26 crc kubenswrapper[4681]: I0404 02:08:26.525782 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:08:26 crc kubenswrapper[4681]: I0404 02:08:26.525892 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152" gracePeriod=600 Apr 04 02:08:27 crc kubenswrapper[4681]: I0404 02:08:27.572835 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152" exitCode=0 Apr 04 02:08:27 crc kubenswrapper[4681]: I0404 02:08:27.573351 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152"} Apr 04 02:08:27 crc kubenswrapper[4681]: I0404 02:08:27.573405 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a"} Apr 04 02:08:27 crc kubenswrapper[4681]: I0404 02:08:27.573439 4681 scope.go:117] "RemoveContainer" 
containerID="c85362a63d53f1caf92cf1cf160f8c227b257437b2ac80c0232b940eca17eb43" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.200024 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.215066 4681 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="b3ed8155-ee0e-4dbb-9fe9-345db1ee27ee" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.215103 4681 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="b3ed8155-ee0e-4dbb-9fe9-345db1ee27ee" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.221320 4681 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.223825 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.229019 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.242411 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.245060 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.605371 4681 generic.go:334] "Generic (PLEG): container finished" podID="d8fd3797d07faa04d98c33c6c96ee09f" containerID="4db1e946d81bcd39cc8e094538a3fb5341622269dc07ba2a599a235afe471b2a" exitCode=0 Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.605420 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"d8fd3797d07faa04d98c33c6c96ee09f","Type":"ContainerDied","Data":"4db1e946d81bcd39cc8e094538a3fb5341622269dc07ba2a599a235afe471b2a"} Apr 04 02:08:32 crc kubenswrapper[4681]: I0404 02:08:32.606396 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"d8fd3797d07faa04d98c33c6c96ee09f","Type":"ContainerStarted","Data":"cebe43573eb6cf2f80e64e6da68bf09399ec0bef1dbefd2657818607f3c41698"} Apr 04 02:08:33 crc kubenswrapper[4681]: I0404 02:08:33.615854 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"d8fd3797d07faa04d98c33c6c96ee09f","Type":"ContainerStarted","Data":"d111a247eae09be562f1442717a55e5682004c61750aef08492b30de75b9af32"} Apr 04 02:08:33 crc kubenswrapper[4681]: I0404 02:08:33.615893 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"d8fd3797d07faa04d98c33c6c96ee09f","Type":"ContainerStarted","Data":"18edaf5f18eb45426e0b9ebcee31a3c5d792892eb475d07dcd61092063d2eddd"} Apr 04 02:08:33 crc kubenswrapper[4681]: I0404 02:08:33.615903 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"d8fd3797d07faa04d98c33c6c96ee09f","Type":"ContainerStarted","Data":"a3daa01f75685bab27f1a416cc135eeef54f44d99beef197d7b53c8167d288d1"} Apr 04 02:08:33 crc kubenswrapper[4681]: I0404 02:08:33.616148 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:08:33 crc kubenswrapper[4681]: I0404 02:08:33.632143 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=1.6321242150000002 podStartE2EDuration="1.632124215s" podCreationTimestamp="2026-04-04 02:08:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:08:33.630547192 +0000 UTC m=+793.296322322" watchObservedRunningTime="2026-04-04 02:08:33.632124215 +0000 UTC m=+793.297899335" Apr 04 02:09:02 crc kubenswrapper[4681]: I0404 02:09:02.795457 4681 scope.go:117] "RemoveContainer" containerID="b0650801cceed6327b0324fea67a6a55d0b3e40b16733b5321987db267f75ec6" Apr 04 02:09:19 crc kubenswrapper[4681]: I0404 02:09:19.787659 4681 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.139323 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.140003 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" containerID="cri-o://0ccf459dde3bb6380a86fadcdf1e4be0d0d77704eb533b1f0f3b8a5a51272132" gracePeriod=120 Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.140236 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver-check-endpoints" containerID="cri-o://92f929c341ccb7c4858283441c3001e4601d8287f04bb983d7276beb95b55533" gracePeriod=120 Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.216771 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:20 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:20 crc 
kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:20 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.216829 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.969670 4681 generic.go:334] "Generic (PLEG): container finished" podID="5da93ec3-d19f-40d9-97f1-994998839180" containerID="92f929c341ccb7c4858283441c3001e4601d8287f04bb983d7276beb95b55533" exitCode=0 Apr 04 02:09:20 crc kubenswrapper[4681]: I0404 02:09:20.969706 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerDied","Data":"92f929c341ccb7c4858283441c3001e4601d8287f04bb983d7276beb95b55533"} Apr 04 02:09:22 crc kubenswrapper[4681]: I0404 02:09:22.248140 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Apr 04 02:09:25 crc kubenswrapper[4681]: I0404 02:09:25.216730 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:25 crc kubenswrapper[4681]: 
[+]poststarthook/max-in-flight-filter ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:25 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:25 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:25 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:25 crc kubenswrapper[4681]: I0404 02:09:25.216829 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:30 crc kubenswrapper[4681]: I0404 02:09:30.217454 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:30 
crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:30 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:30 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:30 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:30 crc kubenswrapper[4681]: I0404 02:09:30.217900 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:30 crc kubenswrapper[4681]: I0404 02:09:30.218029 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 02:09:35 crc kubenswrapper[4681]: I0404 02:09:35.218622 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:35 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:35 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:35 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:35 crc kubenswrapper[4681]: I0404 02:09:35.219093 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:40 crc kubenswrapper[4681]: 
I0404 02:09:40.220180 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:40 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:40 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:40 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:40 crc kubenswrapper[4681]: I0404 02:09:40.220872 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" 
podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:45 crc kubenswrapper[4681]: I0404 02:09:45.219226 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:45 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:45 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:45 crc kubenswrapper[4681]: readyz check 
failed Apr 04 02:09:45 crc kubenswrapper[4681]: I0404 02:09:45.219664 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:50 crc kubenswrapper[4681]: I0404 02:09:50.218546 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:50 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:50 crc kubenswrapper[4681]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:50 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:50 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:50 crc kubenswrapper[4681]: I0404 02:09:50.218988 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:09:55 crc kubenswrapper[4681]: I0404 02:09:55.218160 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]log ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:09:55 crc kubenswrapper[4681]: 
[+]poststarthook/openshift.io-startinformers ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:09:55 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:09:55 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:09:55 crc kubenswrapper[4681]: readyz check failed Apr 04 02:09:55 crc kubenswrapper[4681]: I0404 02:09:55.218573 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.135078 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587810-4s29h"] Apr 04 02:10:00 crc kubenswrapper[4681]: E0404 02:10:00.135670 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5efcf7ec-256b-4faa-98da-c9d737c2287b" containerName="installer" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.135688 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5efcf7ec-256b-4faa-98da-c9d737c2287b" containerName="installer" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.135807 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5efcf7ec-256b-4faa-98da-c9d737c2287b" containerName="installer" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.136288 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.138934 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.138939 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.140455 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.149259 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587810-4s29h"] Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.216782 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]log ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:10:00 crc kubenswrapper[4681]: 
[+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:10:00 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:10:00 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:10:00 crc kubenswrapper[4681]: readyz check failed Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.216847 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.255728 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gvq\" (UniqueName: \"kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq\") pod \"auto-csr-approver-29587810-4s29h\" (UID: \"6bb9cb84-0f19-49e1-8c8a-f8394d24935b\") " pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.356848 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9gvq\" (UniqueName: \"kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq\") pod \"auto-csr-approver-29587810-4s29h\" (UID: \"6bb9cb84-0f19-49e1-8c8a-f8394d24935b\") " pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.386231 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-f9gvq\" (UniqueName: \"kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq\") pod \"auto-csr-approver-29587810-4s29h\" (UID: \"6bb9cb84-0f19-49e1-8c8a-f8394d24935b\") " pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.463577 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.874276 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587810-4s29h"] Apr 04 02:10:00 crc kubenswrapper[4681]: I0404 02:10:00.882904 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:10:01 crc kubenswrapper[4681]: I0404 02:10:01.248223 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587810-4s29h" event={"ID":"6bb9cb84-0f19-49e1-8c8a-f8394d24935b","Type":"ContainerStarted","Data":"4ee0e4c9b12d4fa8ea3eefb7c87ed8bb8ca9d3bd84f26a7087b68443e4447fc8"} Apr 04 02:10:04 crc kubenswrapper[4681]: I0404 02:10:04.272755 4681 generic.go:334] "Generic (PLEG): container finished" podID="6bb9cb84-0f19-49e1-8c8a-f8394d24935b" containerID="b0f838f6731e25e64a6c8e686d39f8b7f91432cf81314f15a4aae78d06a103a2" exitCode=0 Apr 04 02:10:04 crc kubenswrapper[4681]: I0404 02:10:04.272821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587810-4s29h" event={"ID":"6bb9cb84-0f19-49e1-8c8a-f8394d24935b","Type":"ContainerDied","Data":"b0f838f6731e25e64a6c8e686d39f8b7f91432cf81314f15a4aae78d06a103a2"} Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.219192 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[+]ping ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]log ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]etcd excluded: ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]etcd-readiness excluded: ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/start-apiserver-admission-initializer ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]informer-sync ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/generic-apiserver-start-informers ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/max-in-flight-filter ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/storage-object-count-tracker-hook ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/image.openshift.io-apiserver-caches ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectcache ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-startinformers ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/openshift.io-restmapperupdater ok Apr 04 02:10:05 crc kubenswrapper[4681]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Apr 04 02:10:05 crc kubenswrapper[4681]: [-]shutdown failed: reason withheld Apr 04 02:10:05 crc kubenswrapper[4681]: readyz check failed Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.220763 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.600148 4681 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.625998 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9gvq\" (UniqueName: \"kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq\") pod \"6bb9cb84-0f19-49e1-8c8a-f8394d24935b\" (UID: \"6bb9cb84-0f19-49e1-8c8a-f8394d24935b\") " Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.632986 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq" (OuterVolumeSpecName: "kube-api-access-f9gvq") pod "6bb9cb84-0f19-49e1-8c8a-f8394d24935b" (UID: "6bb9cb84-0f19-49e1-8c8a-f8394d24935b"). InnerVolumeSpecName "kube-api-access-f9gvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:10:05 crc kubenswrapper[4681]: I0404 02:10:05.727179 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9gvq\" (UniqueName: \"kubernetes.io/projected/6bb9cb84-0f19-49e1-8c8a-f8394d24935b-kube-api-access-f9gvq\") on node \"crc\" DevicePath \"\"" Apr 04 02:10:06 crc kubenswrapper[4681]: I0404 02:10:06.286690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587810-4s29h" event={"ID":"6bb9cb84-0f19-49e1-8c8a-f8394d24935b","Type":"ContainerDied","Data":"4ee0e4c9b12d4fa8ea3eefb7c87ed8bb8ca9d3bd84f26a7087b68443e4447fc8"} Apr 04 02:10:06 crc kubenswrapper[4681]: I0404 02:10:06.286736 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587810-4s29h" Apr 04 02:10:06 crc kubenswrapper[4681]: I0404 02:10:06.286739 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee0e4c9b12d4fa8ea3eefb7c87ed8bb8ca9d3bd84f26a7087b68443e4447fc8" Apr 04 02:10:06 crc kubenswrapper[4681]: I0404 02:10:06.675809 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587804-rq24v"] Apr 04 02:10:06 crc kubenswrapper[4681]: I0404 02:10:06.684392 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587804-rq24v"] Apr 04 02:10:07 crc kubenswrapper[4681]: I0404 02:10:07.211066 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b3eda4-d536-4b71-990d-56a7d574b4dc" path="/var/lib/kubelet/pods/38b3eda4-d536-4b71-990d-56a7d574b4dc/volumes" Apr 04 02:10:10 crc kubenswrapper[4681]: I0404 02:10:10.212753 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:10 crc kubenswrapper[4681]: I0404 02:10:10.212829 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:15 crc kubenswrapper[4681]: I0404 02:10:15.212511 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": 
dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:15 crc kubenswrapper[4681]: I0404 02:10:15.212999 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:20 crc kubenswrapper[4681]: I0404 02:10:20.211990 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:20 crc kubenswrapper[4681]: I0404 02:10:20.212486 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:25 crc kubenswrapper[4681]: I0404 02:10:25.215203 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:25 crc kubenswrapper[4681]: I0404 02:10:25.215709 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get 
\"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:30 crc kubenswrapper[4681]: I0404 02:10:30.212629 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:30 crc kubenswrapper[4681]: I0404 02:10:30.214502 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:35 crc kubenswrapper[4681]: I0404 02:10:35.213065 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:35 crc kubenswrapper[4681]: I0404 02:10:35.213429 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:40 crc kubenswrapper[4681]: I0404 02:10:40.212691 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get 
\"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:40 crc kubenswrapper[4681]: I0404 02:10:40.213092 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:45 crc kubenswrapper[4681]: I0404 02:10:45.212549 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:45 crc kubenswrapper[4681]: I0404 02:10:45.212976 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:50 crc kubenswrapper[4681]: I0404 02:10:50.212687 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:50 crc kubenswrapper[4681]: I0404 02:10:50.213076 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" 
probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:55 crc kubenswrapper[4681]: I0404 02:10:55.212102 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:10:55 crc kubenswrapper[4681]: I0404 02:10:55.213532 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:10:56 crc kubenswrapper[4681]: I0404 02:10:56.523876 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:10:56 crc kubenswrapper[4681]: I0404 02:10:56.523951 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:11:00 crc kubenswrapper[4681]: I0404 02:11:00.212637 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get 
\"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:11:00 crc kubenswrapper[4681]: I0404 02:11:00.212974 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:11:02 crc kubenswrapper[4681]: I0404 02:11:02.889309 4681 scope.go:117] "RemoveContainer" containerID="265177d99a0a8387ef416d770ef00f96d777916cd945fdcf9a2cb6dc0c3b21bb" Apr 04 02:11:05 crc kubenswrapper[4681]: I0404 02:11:05.212737 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:11:05 crc kubenswrapper[4681]: I0404 02:11:05.213084 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:11:10 crc kubenswrapper[4681]: I0404 02:11:10.212112 4681 patch_prober.go:28] interesting pod/apiserver-76f77b778f-xpn96 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Apr 04 02:11:10 crc kubenswrapper[4681]: I0404 02:11:10.212407 4681 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/readyz?exclude=etcd&exclude=etcd-readiness\": dial tcp 10.217.0.5:8443: connect: connection refused" Apr 04 02:11:10 crc kubenswrapper[4681]: I0404 02:11:10.717664 4681 generic.go:334] "Generic (PLEG): container finished" podID="5da93ec3-d19f-40d9-97f1-994998839180" containerID="0ccf459dde3bb6380a86fadcdf1e4be0d0d77704eb533b1f0f3b8a5a51272132" exitCode=0 Apr 04 02:11:10 crc kubenswrapper[4681]: I0404 02:11:10.717723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerDied","Data":"0ccf459dde3bb6380a86fadcdf1e4be0d0d77704eb533b1f0f3b8a5a51272132"} Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.020223 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.052974 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-94f957976-5w4r9"] Apr 04 02:11:11 crc kubenswrapper[4681]: E0404 02:11:11.053280 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="fix-audit-permissions" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053297 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="fix-audit-permissions" Apr 04 02:11:11 crc kubenswrapper[4681]: E0404 02:11:11.053321 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver-check-endpoints" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053330 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver-check-endpoints" Apr 04 02:11:11 crc kubenswrapper[4681]: E0404 02:11:11.053348 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bb9cb84-0f19-49e1-8c8a-f8394d24935b" containerName="oc" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053356 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bb9cb84-0f19-49e1-8c8a-f8394d24935b" containerName="oc" Apr 04 02:11:11 crc kubenswrapper[4681]: E0404 02:11:11.053370 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053378 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053491 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver-check-endpoints" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053509 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5da93ec3-d19f-40d9-97f1-994998839180" containerName="openshift-apiserver" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.053527 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bb9cb84-0f19-49e1-8c8a-f8394d24935b" containerName="oc" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.054407 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.074591 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-94f957976-5w4r9"] Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189283 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189336 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189396 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189430 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189446 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189478 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrhwv\" (UniqueName: \"kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189496 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189511 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189526 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189544 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189570 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca\") pod \"5da93ec3-d19f-40d9-97f1-994998839180\" (UID: \"5da93ec3-d19f-40d9-97f1-994998839180\") " Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189832 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-serving-cert\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189866 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-encryption-config\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189894 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-config\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-node-pullsecrets\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189935 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" 
(UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189967 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit-dir\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.189985 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-client\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190004 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-image-import-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190036 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-trusted-ca-bundle\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190052 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-serving-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190066 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnnp9\" (UniqueName: \"kubernetes.io/projected/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-kube-api-access-vnnp9\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190101 4681 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190326 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190853 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config" (OuterVolumeSpecName: "config") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190886 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190913 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.190906 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit" (OuterVolumeSpecName: "audit") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.191230 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.194235 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.194286 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.194315 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.194456 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv" (OuterVolumeSpecName: "kube-api-access-xrhwv") pod "5da93ec3-d19f-40d9-97f1-994998839180" (UID: "5da93ec3-d19f-40d9-97f1-994998839180"). InnerVolumeSpecName "kube-api-access-xrhwv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291378 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-image-import-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291518 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-trusted-ca-bundle\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291565 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-serving-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291602 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnnp9\" (UniqueName: \"kubernetes.io/projected/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-kube-api-access-vnnp9\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291670 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-serving-cert\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " 
pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291768 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-encryption-config\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291853 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-config\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291889 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-node-pullsecrets\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.291967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit-dir\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc 
kubenswrapper[4681]: I0404 02:11:11.292006 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-client\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292068 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-etcd-client\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292089 4681 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-audit\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292106 4681 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-encryption-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292124 4681 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5da93ec3-d19f-40d9-97f1-994998839180-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292141 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrhwv\" (UniqueName: \"kubernetes.io/projected/5da93ec3-d19f-40d9-97f1-994998839180-kube-api-access-xrhwv\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292159 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc 
kubenswrapper[4681]: I0404 02:11:11.292177 4681 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5da93ec3-d19f-40d9-97f1-994998839180-audit-dir\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292194 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292212 4681 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-image-import-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292231 4681 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5da93ec3-d19f-40d9-97f1-994998839180-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292416 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-image-import-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292470 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-node-pullsecrets\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.292854 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit-dir\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.293351 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-config\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.293484 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-audit\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.293745 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-trusted-ca-bundle\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.293889 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-serving-ca\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.303904 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-encryption-config\") pod 
\"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.304343 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-etcd-client\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.304749 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-serving-cert\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.313136 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnnp9\" (UniqueName: \"kubernetes.io/projected/6a54414b-7a3a-4cf7-bcb5-d5fab4cee370-kube-api-access-vnnp9\") pod \"apiserver-94f957976-5w4r9\" (UID: \"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370\") " pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.372596 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.725592 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" event={"ID":"5da93ec3-d19f-40d9-97f1-994998839180","Type":"ContainerDied","Data":"8829d19903458c45eb0ab9c5aedc01fd8312ee86f1718766167a94281efb0d41"} Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.725818 4681 scope.go:117] "RemoveContainer" containerID="92f929c341ccb7c4858283441c3001e4601d8287f04bb983d7276beb95b55533" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.725658 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-xpn96" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.745439 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.750333 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-xpn96"] Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.757476 4681 scope.go:117] "RemoveContainer" containerID="0ccf459dde3bb6380a86fadcdf1e4be0d0d77704eb533b1f0f3b8a5a51272132" Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.772167 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-94f957976-5w4r9"] Apr 04 02:11:11 crc kubenswrapper[4681]: I0404 02:11:11.773189 4681 scope.go:117] "RemoveContainer" containerID="7fd6a52dc69a2290ee5d872d48e3ed38f746111abac9762955c9fbc7e8e81ec4" Apr 04 02:11:11 crc kubenswrapper[4681]: W0404 02:11:11.777942 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a54414b_7a3a_4cf7_bcb5_d5fab4cee370.slice/crio-42cbb343dc6f452d6ad5a00d9365c7585d82576a4e09008c026aa6124cec4069 WatchSource:0}: Error finding container 
42cbb343dc6f452d6ad5a00d9365c7585d82576a4e09008c026aa6124cec4069: Status 404 returned error can't find the container with id 42cbb343dc6f452d6ad5a00d9365c7585d82576a4e09008c026aa6124cec4069 Apr 04 02:11:12 crc kubenswrapper[4681]: I0404 02:11:12.733940 4681 generic.go:334] "Generic (PLEG): container finished" podID="6a54414b-7a3a-4cf7-bcb5-d5fab4cee370" containerID="9bb4c6003810d17fe367e9a534d1e5eb9fc20e5155ac2bdf60698d690880d46f" exitCode=0 Apr 04 02:11:12 crc kubenswrapper[4681]: I0404 02:11:12.734364 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-94f957976-5w4r9" event={"ID":"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370","Type":"ContainerDied","Data":"9bb4c6003810d17fe367e9a534d1e5eb9fc20e5155ac2bdf60698d690880d46f"} Apr 04 02:11:12 crc kubenswrapper[4681]: I0404 02:11:12.734414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-94f957976-5w4r9" event={"ID":"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370","Type":"ContainerStarted","Data":"42cbb343dc6f452d6ad5a00d9365c7585d82576a4e09008c026aa6124cec4069"} Apr 04 02:11:13 crc kubenswrapper[4681]: I0404 02:11:13.208606 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5da93ec3-d19f-40d9-97f1-994998839180" path="/var/lib/kubelet/pods/5da93ec3-d19f-40d9-97f1-994998839180/volumes" Apr 04 02:11:13 crc kubenswrapper[4681]: I0404 02:11:13.743453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-94f957976-5w4r9" event={"ID":"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370","Type":"ContainerStarted","Data":"5f75a78eafb7a180fca68e6abf70b0897e86a173e63c0d0011f73ef7dac3c34d"} Apr 04 02:11:13 crc kubenswrapper[4681]: I0404 02:11:13.743497 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-94f957976-5w4r9" event={"ID":"6a54414b-7a3a-4cf7-bcb5-d5fab4cee370","Type":"ContainerStarted","Data":"296907a6080f31940e7fe764ac2cf188de28e11d0fcd57f977aa12c029cb269c"} Apr 04 02:11:13 
crc kubenswrapper[4681]: I0404 02:11:13.774736 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-94f957976-5w4r9" podStartSLOduration=113.774717909 podStartE2EDuration="1m53.774717909s" podCreationTimestamp="2026-04-04 02:09:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:11:13.772593739 +0000 UTC m=+953.438368859" watchObservedRunningTime="2026-04-04 02:11:13.774717909 +0000 UTC m=+953.440493029" Apr 04 02:11:16 crc kubenswrapper[4681]: I0404 02:11:16.373133 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:16 crc kubenswrapper[4681]: I0404 02:11:16.373510 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:16 crc kubenswrapper[4681]: I0404 02:11:16.380513 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:16 crc kubenswrapper[4681]: I0404 02:11:16.765660 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-94f957976-5w4r9" Apr 04 02:11:26 crc kubenswrapper[4681]: I0404 02:11:26.524210 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:11:26 crc kubenswrapper[4681]: I0404 02:11:26.524822 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:11:56 crc kubenswrapper[4681]: I0404 02:11:56.525107 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:11:56 crc kubenswrapper[4681]: I0404 02:11:56.525946 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:11:56 crc kubenswrapper[4681]: I0404 02:11:56.526031 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:11:56 crc kubenswrapper[4681]: I0404 02:11:56.527033 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:11:56 crc kubenswrapper[4681]: I0404 02:11:56.527138 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a" gracePeriod=600 Apr 04 02:11:57 crc kubenswrapper[4681]: I0404 02:11:57.245791 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a" exitCode=0 Apr 04 02:11:57 crc kubenswrapper[4681]: I0404 02:11:57.245884 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a"} Apr 04 02:11:57 crc kubenswrapper[4681]: I0404 02:11:57.246926 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0"} Apr 04 02:11:57 crc kubenswrapper[4681]: I0404 02:11:57.246970 4681 scope.go:117] "RemoveContainer" containerID="9cdb4a37ebc45c431b49d8569b090b8ea3b25e9985ca40aa49f8ebf3ea0f3152" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.133074 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587812-7nb9k"] Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.134239 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.136835 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.137185 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.137407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.141863 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587812-7nb9k"] Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.291397 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfqx\" (UniqueName: \"kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx\") pod \"auto-csr-approver-29587812-7nb9k\" (UID: \"4fa244c5-86ba-46d3-95de-975a1789cf9d\") " pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.392512 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfqx\" (UniqueName: \"kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx\") pod \"auto-csr-approver-29587812-7nb9k\" (UID: \"4fa244c5-86ba-46d3-95de-975a1789cf9d\") " pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.414701 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfqx\" (UniqueName: \"kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx\") pod \"auto-csr-approver-29587812-7nb9k\" (UID: \"4fa244c5-86ba-46d3-95de-975a1789cf9d\") " 
pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.449654 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:00 crc kubenswrapper[4681]: I0404 02:12:00.893645 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587812-7nb9k"] Apr 04 02:12:01 crc kubenswrapper[4681]: I0404 02:12:01.277337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" event={"ID":"4fa244c5-86ba-46d3-95de-975a1789cf9d","Type":"ContainerStarted","Data":"d109ca7a6293a03cf3d6ff6859377019f09e80f6133ba5e623ba416715d109f0"} Apr 04 02:12:02 crc kubenswrapper[4681]: E0404 02:12:02.882987 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa244c5_86ba_46d3_95de_975a1789cf9d.slice/crio-conmon-ac90fea8b09e8ac7eafdf636d72d3114175dd78ea36a2f7b4c5adffe8be30c3f.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:12:03 crc kubenswrapper[4681]: I0404 02:12:03.291553 4681 generic.go:334] "Generic (PLEG): container finished" podID="4fa244c5-86ba-46d3-95de-975a1789cf9d" containerID="ac90fea8b09e8ac7eafdf636d72d3114175dd78ea36a2f7b4c5adffe8be30c3f" exitCode=0 Apr 04 02:12:03 crc kubenswrapper[4681]: I0404 02:12:03.291626 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" event={"ID":"4fa244c5-86ba-46d3-95de-975a1789cf9d","Type":"ContainerDied","Data":"ac90fea8b09e8ac7eafdf636d72d3114175dd78ea36a2f7b4c5adffe8be30c3f"} Apr 04 02:12:04 crc kubenswrapper[4681]: I0404 02:12:04.542563 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:04 crc kubenswrapper[4681]: I0404 02:12:04.553840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mfqx\" (UniqueName: \"kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx\") pod \"4fa244c5-86ba-46d3-95de-975a1789cf9d\" (UID: \"4fa244c5-86ba-46d3-95de-975a1789cf9d\") " Apr 04 02:12:04 crc kubenswrapper[4681]: I0404 02:12:04.559864 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx" (OuterVolumeSpecName: "kube-api-access-9mfqx") pod "4fa244c5-86ba-46d3-95de-975a1789cf9d" (UID: "4fa244c5-86ba-46d3-95de-975a1789cf9d"). InnerVolumeSpecName "kube-api-access-9mfqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:12:04 crc kubenswrapper[4681]: I0404 02:12:04.655661 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mfqx\" (UniqueName: \"kubernetes.io/projected/4fa244c5-86ba-46d3-95de-975a1789cf9d-kube-api-access-9mfqx\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.308504 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" event={"ID":"4fa244c5-86ba-46d3-95de-975a1789cf9d","Type":"ContainerDied","Data":"d109ca7a6293a03cf3d6ff6859377019f09e80f6133ba5e623ba416715d109f0"} Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.308549 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d109ca7a6293a03cf3d6ff6859377019f09e80f6133ba5e623ba416715d109f0" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.308566 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587812-7nb9k" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.592501 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587806-vm2z8"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.596720 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587806-vm2z8"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.695527 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl"] Apr 04 02:12:05 crc kubenswrapper[4681]: E0404 02:12:05.695784 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa244c5-86ba-46d3-95de-975a1789cf9d" containerName="oc" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.695798 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa244c5-86ba-46d3-95de-975a1789cf9d" containerName="oc" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.695930 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa244c5-86ba-46d3-95de-975a1789cf9d" containerName="oc" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.696382 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.698149 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-nwwp6" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.698196 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.700124 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.708823 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-b7xpz"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.709786 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b7xpz" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.713355 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-tgx8t" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.717727 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.728889 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6ppdd"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.729917 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.735038 4681 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-b7854" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.739145 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b7xpz"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.752317 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6ppdd"] Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.767051 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bldwh\" (UniqueName: \"kubernetes.io/projected/9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e-kube-api-access-bldwh\") pod \"cert-manager-cainjector-cf98fcc89-2vkfl\" (UID: \"9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.767101 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbgxk\" (UniqueName: \"kubernetes.io/projected/4d2e304b-f02c-427a-b2a2-f1e8cc7efb70-kube-api-access-mbgxk\") pod \"cert-manager-webhook-687f57d79b-6ppdd\" (UID: \"4d2e304b-f02c-427a-b2a2-f1e8cc7efb70\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.767131 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt7fb\" (UniqueName: \"kubernetes.io/projected/d71066c7-07f6-471d-9d8d-6746b3f229e9-kube-api-access-wt7fb\") pod \"cert-manager-858654f9db-b7xpz\" (UID: \"d71066c7-07f6-471d-9d8d-6746b3f229e9\") " pod="cert-manager/cert-manager-858654f9db-b7xpz" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.867485 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bldwh\" (UniqueName: \"kubernetes.io/projected/9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e-kube-api-access-bldwh\") pod \"cert-manager-cainjector-cf98fcc89-2vkfl\" (UID: \"9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.867817 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbgxk\" (UniqueName: \"kubernetes.io/projected/4d2e304b-f02c-427a-b2a2-f1e8cc7efb70-kube-api-access-mbgxk\") pod \"cert-manager-webhook-687f57d79b-6ppdd\" (UID: \"4d2e304b-f02c-427a-b2a2-f1e8cc7efb70\") " pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.867960 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt7fb\" (UniqueName: \"kubernetes.io/projected/d71066c7-07f6-471d-9d8d-6746b3f229e9-kube-api-access-wt7fb\") pod \"cert-manager-858654f9db-b7xpz\" (UID: \"d71066c7-07f6-471d-9d8d-6746b3f229e9\") " pod="cert-manager/cert-manager-858654f9db-b7xpz" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.884408 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt7fb\" (UniqueName: \"kubernetes.io/projected/d71066c7-07f6-471d-9d8d-6746b3f229e9-kube-api-access-wt7fb\") pod \"cert-manager-858654f9db-b7xpz\" (UID: \"d71066c7-07f6-471d-9d8d-6746b3f229e9\") " pod="cert-manager/cert-manager-858654f9db-b7xpz" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.884885 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbgxk\" (UniqueName: \"kubernetes.io/projected/4d2e304b-f02c-427a-b2a2-f1e8cc7efb70-kube-api-access-mbgxk\") pod \"cert-manager-webhook-687f57d79b-6ppdd\" (UID: \"4d2e304b-f02c-427a-b2a2-f1e8cc7efb70\") " 
pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:05 crc kubenswrapper[4681]: I0404 02:12:05.886169 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bldwh\" (UniqueName: \"kubernetes.io/projected/9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e-kube-api-access-bldwh\") pod \"cert-manager-cainjector-cf98fcc89-2vkfl\" (UID: \"9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 02:12:06.011112 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 02:12:06.022662 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-b7xpz" Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 02:12:06.047081 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 02:12:06.445301 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-b7xpz"] Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 02:12:06.474689 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl"] Apr 04 02:12:06 crc kubenswrapper[4681]: W0404 02:12:06.480426 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fde43ea_36ff_4f94_ba5e_8e1ea1338b1e.slice/crio-fde388d6e00602669f368d54448cec626c22b593fad6416950ce93c6a95a12c0 WatchSource:0}: Error finding container fde388d6e00602669f368d54448cec626c22b593fad6416950ce93c6a95a12c0: Status 404 returned error can't find the container with id fde388d6e00602669f368d54448cec626c22b593fad6416950ce93c6a95a12c0 Apr 04 02:12:06 crc kubenswrapper[4681]: I0404 
02:12:06.508986 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-6ppdd"] Apr 04 02:12:06 crc kubenswrapper[4681]: W0404 02:12:06.510459 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d2e304b_f02c_427a_b2a2_f1e8cc7efb70.slice/crio-4cf1d87a558f2bdaf5fd6c4104e23dc34788790f3a06555a3d96e31079c168f1 WatchSource:0}: Error finding container 4cf1d87a558f2bdaf5fd6c4104e23dc34788790f3a06555a3d96e31079c168f1: Status 404 returned error can't find the container with id 4cf1d87a558f2bdaf5fd6c4104e23dc34788790f3a06555a3d96e31079c168f1 Apr 04 02:12:07 crc kubenswrapper[4681]: I0404 02:12:07.208844 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1706eb60-e09b-43cd-8c84-6b617ee0deb3" path="/var/lib/kubelet/pods/1706eb60-e09b-43cd-8c84-6b617ee0deb3/volumes" Apr 04 02:12:07 crc kubenswrapper[4681]: I0404 02:12:07.322164 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" event={"ID":"9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e","Type":"ContainerStarted","Data":"fde388d6e00602669f368d54448cec626c22b593fad6416950ce93c6a95a12c0"} Apr 04 02:12:07 crc kubenswrapper[4681]: I0404 02:12:07.323172 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" event={"ID":"4d2e304b-f02c-427a-b2a2-f1e8cc7efb70","Type":"ContainerStarted","Data":"4cf1d87a558f2bdaf5fd6c4104e23dc34788790f3a06555a3d96e31079c168f1"} Apr 04 02:12:07 crc kubenswrapper[4681]: I0404 02:12:07.323847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-b7xpz" event={"ID":"d71066c7-07f6-471d-9d8d-6746b3f229e9","Type":"ContainerStarted","Data":"5f6e6291ac0835954ac6858a7ddc07e57c15b35719264b53acf736a81c0e8be0"} Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.354041 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-858654f9db-b7xpz" event={"ID":"d71066c7-07f6-471d-9d8d-6746b3f229e9","Type":"ContainerStarted","Data":"35c2235d99b75bca107e0656b96a0f2d69ab35147bef1063aad3e4606a2dd266"} Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.355708 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" event={"ID":"9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e","Type":"ContainerStarted","Data":"bf01fa1a77dced157041b1a2e9d7eac4e4aadbc66a955459aa2434b68dbb17bc"} Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.357302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" event={"ID":"4d2e304b-f02c-427a-b2a2-f1e8cc7efb70","Type":"ContainerStarted","Data":"868bf5b9af5103d6a2ebee6c0182dcf80c12b0c1fa943498b3401f32c718e8fb"} Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.357458 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.378171 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-b7xpz" podStartSLOduration=1.965742895 podStartE2EDuration="6.378150216s" podCreationTimestamp="2026-04-04 02:12:05 +0000 UTC" firstStartedPulling="2026-04-04 02:12:06.451515247 +0000 UTC m=+1006.117290367" lastFinishedPulling="2026-04-04 02:12:10.863922558 +0000 UTC m=+1010.529697688" observedRunningTime="2026-04-04 02:12:11.372562918 +0000 UTC m=+1011.038338048" watchObservedRunningTime="2026-04-04 02:12:11.378150216 +0000 UTC m=+1011.043925356" Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.391019 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-2vkfl" podStartSLOduration=1.952794229 podStartE2EDuration="6.390997839s" podCreationTimestamp="2026-04-04 02:12:05 
+0000 UTC" firstStartedPulling="2026-04-04 02:12:06.482524494 +0000 UTC m=+1006.148299614" lastFinishedPulling="2026-04-04 02:12:10.920728104 +0000 UTC m=+1010.586503224" observedRunningTime="2026-04-04 02:12:11.386576604 +0000 UTC m=+1011.052351744" watchObservedRunningTime="2026-04-04 02:12:11.390997839 +0000 UTC m=+1011.056772979" Apr 04 02:12:11 crc kubenswrapper[4681]: I0404 02:12:11.412922 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" podStartSLOduration=2.060749123 podStartE2EDuration="6.412894329s" podCreationTimestamp="2026-04-04 02:12:05 +0000 UTC" firstStartedPulling="2026-04-04 02:12:06.511780072 +0000 UTC m=+1006.177555192" lastFinishedPulling="2026-04-04 02:12:10.863925238 +0000 UTC m=+1010.529700398" observedRunningTime="2026-04-04 02:12:11.407950258 +0000 UTC m=+1011.073725418" watchObservedRunningTime="2026-04-04 02:12:11.412894329 +0000 UTC m=+1011.078669459" Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.761697 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cntwc"] Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.762669 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-controller" containerID="cri-o://efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763124 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" containerID="cri-o://781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763184 4681 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="sbdb" containerID="cri-o://4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763247 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-node" containerID="cri-o://d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763283 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="northd" containerID="cri-o://1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763251 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="nbdb" containerID="cri-o://0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.763320 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500" gracePeriod=30 Apr 04 02:12:15 crc kubenswrapper[4681]: I0404 02:12:15.789019 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" 
containerID="cri-o://7eb98cfe208dd9452114fa20dd0da34ba28dc788cd640b4539e2b080908b2f51" gracePeriod=30 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.050333 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-6ppdd" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.392645 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5wbs_cab7ffc5-0101-48b8-87ab-de8324bacc38/kube-multus/1.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.393232 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5wbs_cab7ffc5-0101-48b8-87ab-de8324bacc38/kube-multus/0.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.393337 4681 generic.go:334] "Generic (PLEG): container finished" podID="cab7ffc5-0101-48b8-87ab-de8324bacc38" containerID="7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00" exitCode=2 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.393471 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerDied","Data":"7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.393553 4681 scope.go:117] "RemoveContainer" containerID="78d4ed55792b7833dd8cad5be40b937608ded3aa5614e7dd9d241ba6915bcbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.394671 4681 scope.go:117] "RemoveContainer" containerID="7bd7f38b8f100b2680bf0d1741ec304c194c7ef2b97d0e033338fb1b9ed11e00" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.398178 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovnkube-controller/2.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.399041 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/1.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.403501 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/0.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.405996 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-controller/0.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406763 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="7eb98cfe208dd9452114fa20dd0da34ba28dc788cd640b4539e2b080908b2f51" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406805 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7" exitCode=143 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406819 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406832 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406843 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406855 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406866 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b" exitCode=0 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406877 4681 generic.go:334] "Generic (PLEG): container finished" podID="d004639b-c07a-4401-8588-8af4ed981db3" containerID="efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840" exitCode=143 Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406906 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"7eb98cfe208dd9452114fa20dd0da34ba28dc788cd640b4539e2b080908b2f51"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406941 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406964 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.406994 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.407007 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.407020 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b"} Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.407038 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840"} Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.564057 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2 is running failed: container process not found" containerID="4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.564319 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4 is running failed: container process not found" containerID="0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.564924 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4 is running failed: container process not found" containerID="0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.565010 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2 is running failed: container process not found" containerID="4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.565236 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4 is running failed: container process not found" containerID="0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.565349 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="nbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.565642 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2 is running failed: container process not found" containerID="4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.565687 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2 is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="sbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.595615 4681 scope.go:117] "RemoveContainer" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.785480 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9\": container with ID starting with 63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9 not found: ID does not exist" containerID="63ae47ecb180b30ba0d105b197dfb996fd35e622dcf11c5eacfbb0cc0ddafea9" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.785556 4681 scope.go:117] "RemoveContainer" containerID="88384a096ba35de77fe905c618d6700477502a7f5d8dad360e3783f908234037" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.786408 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/1.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.788501 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-controller/0.log" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.788922 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.848715 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4rmkn"] Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.848998 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849015 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849031 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849039 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849048 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="sbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849056 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="sbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849066 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="northd" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849073 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="northd" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849082 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" 
containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849090 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849100 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849123 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849130 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849139 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kubecfg-setup" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849146 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kubecfg-setup" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849156 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-ovn-metrics" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849162 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-ovn-metrics" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849174 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849181 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849192 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="nbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849199 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="nbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849210 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-node" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849217 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-node" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849407 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849424 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="nbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849437 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849446 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849456 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-node" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849468 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849477 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849489 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="kube-rbac-proxy-ovn-metrics" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849497 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="northd" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849507 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="sbdb" Apr 04 02:12:16 crc kubenswrapper[4681]: E0404 02:12:16.849624 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849633 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovn-acl-logging" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849736 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.849890 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d004639b-c07a-4401-8588-8af4ed981db3" containerName="ovnkube-controller" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.854696 4681 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.926906 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.926957 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.926989 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927021 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927046 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927050 4681 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927070 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927139 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927158 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927188 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927224 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927250 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927302 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927181 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927198 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash" (OuterVolumeSpecName: "host-slash") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927223 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket" (OuterVolumeSpecName: "log-socket") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927328 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927352 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927381 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log" (OuterVolumeSpecName: "node-log") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927385 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927412 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927431 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927457 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz8jr\" (UniqueName: \"kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927476 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927351 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927549 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927569 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd\") pod \"d004639b-c07a-4401-8588-8af4ed981db3\" (UID: \"d004639b-c07a-4401-8588-8af4ed981db3\") " Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927808 4681 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-slash\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927823 4681 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927834 4681 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-systemd-units\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927848 4681 
reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927859 4681 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-log-socket\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927868 4681 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-node-log\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927879 4681 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-ovn\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927889 4681 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-kubelet\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927899 4681 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-netns\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927909 4681 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927931 4681 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927961 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.927986 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.928008 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.928084 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.928311 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.928625 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.942691 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.946540 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr" (OuterVolumeSpecName: "kube-api-access-vz8jr") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "kube-api-access-vz8jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:12:16 crc kubenswrapper[4681]: I0404 02:12:16.962294 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d004639b-c07a-4401-8588-8af4ed981db3" (UID: "d004639b-c07a-4401-8588-8af4ed981db3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029705 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-ovn\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029844 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-log-socket\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029910 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-env-overrides\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029936 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-systemd\") pod \"ovnkube-node-4rmkn\" (UID: 
\"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029957 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-netd\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.029978 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-config\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030002 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-kubelet\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030032 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-script-lib\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-bin\") pod 
\"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030169 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-systemd-units\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030189 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030207 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030328 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-etc-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030388 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030419 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-node-log\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99mdq\" (UniqueName: \"kubernetes.io/projected/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-kube-api-access-99mdq\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030477 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-slash\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030505 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-var-lib-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030532 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-netns\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030560 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovn-node-metrics-cert\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030630 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030645 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz8jr\" (UniqueName: \"kubernetes.io/projected/d004639b-c07a-4401-8588-8af4ed981db3-kube-api-access-vz8jr\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030660 4681 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030676 4681 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030689 4681 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-env-overrides\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030700 4681 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030710 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d004639b-c07a-4401-8588-8af4ed981db3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030721 4681 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d004639b-c07a-4401-8588-8af4ed981db3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.030731 4681 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d004639b-c07a-4401-8588-8af4ed981db3-run-systemd\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132368 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-bin\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-systemd-units\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 
02:12:17.132439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132462 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132489 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-etc-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132514 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132522 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-bin\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132549 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-systemd-units\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132584 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132537 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-node-log\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132588 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-node-log\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132620 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-etc-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132642 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132649 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-ovn-kubernetes\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132750 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99mdq\" (UniqueName: \"kubernetes.io/projected/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-kube-api-access-99mdq\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-slash\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-var-lib-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132832 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-slash\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132834 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-netns\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132856 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-run-netns\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132912 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-var-lib-openvswitch\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.132887 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovn-node-metrics-cert\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133011 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-ovn\") pod 
\"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133063 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-log-socket\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133102 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-env-overrides\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133124 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-systemd\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-ovn\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-netd\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-config\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133197 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-run-systemd\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133198 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-kubelet\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133224 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-kubelet\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133220 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-log-socket\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133286 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-script-lib\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133206 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-host-cni-netd\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.133985 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-config\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.134097 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovnkube-script-lib\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.134471 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-env-overrides\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.137862 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-ovn-node-metrics-cert\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.152684 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99mdq\" (UniqueName: \"kubernetes.io/projected/fc0de3af-9143-47d3-a65c-0ff7cb2f46d0-kube-api-access-99mdq\") pod \"ovnkube-node-4rmkn\" (UID: \"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0\") " pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.205515 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.413986 4681 generic.go:334] "Generic (PLEG): container finished" podID="fc0de3af-9143-47d3-a65c-0ff7cb2f46d0" containerID="ad6aae7ed75392764dbc69fd9043b30920c08d25b3320cfbff987edf986a03ce" exitCode=0 Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.414068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerDied","Data":"ad6aae7ed75392764dbc69fd9043b30920c08d25b3320cfbff987edf986a03ce"} Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.414445 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"a0d5bc9a58261caae3585b523a4444771a8035b75baf35ff496502faac5fc501"} Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.416826 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-w5wbs_cab7ffc5-0101-48b8-87ab-de8324bacc38/kube-multus/1.log" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.416950 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-w5wbs" event={"ID":"cab7ffc5-0101-48b8-87ab-de8324bacc38","Type":"ContainerStarted","Data":"8cb60429135c7cd49e83eae6c3d159f0cba1022c2e88179f736e4246aae6de6e"} Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.421326 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-acl-logging/1.log" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.423297 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-cntwc_d004639b-c07a-4401-8588-8af4ed981db3/ovn-controller/0.log" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.423639 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" event={"ID":"d004639b-c07a-4401-8588-8af4ed981db3","Type":"ContainerDied","Data":"15cab4aace595262b01cd287da7d8d427cff77f3f5b2304cb74965bc5358f82f"} Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.423689 4681 scope.go:117] "RemoveContainer" containerID="7eb98cfe208dd9452114fa20dd0da34ba28dc788cd640b4539e2b080908b2f51" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.423753 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-cntwc" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.451192 4681 scope.go:117] "RemoveContainer" containerID="781945902b239ddb43e807866e874c9772627c2a89a780bb78915f9d56be3af7" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.477025 4681 scope.go:117] "RemoveContainer" containerID="4b65c78e32ddd224ada91fbdc9cbb29ccf856a749c80ca55db5d72a81be0f6b2" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.500129 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cntwc"] Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.505289 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-cntwc"] Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.506840 4681 scope.go:117] "RemoveContainer" containerID="0fdfc9c349e6678301d3158c25393964508989741034e7bf89fa1184f76c03a4" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.530478 4681 scope.go:117] "RemoveContainer" containerID="1eaf3363b57f81c1d30955e2f079097ee67801f30ce59a31744cbafe32358635" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.545206 4681 scope.go:117] "RemoveContainer" containerID="4d2840027a88428593a03be2fe95869f9aec36765ef0a70345f5e7832bbe6500" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.559276 4681 scope.go:117] "RemoveContainer" containerID="d6e4534b33b6e5ef0b5215513f32180fc0ca211032267b4a8c4234f8898f568b" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.578838 4681 scope.go:117] "RemoveContainer" containerID="efab081b44f2dca4164229bbc1cc45821aa0e010e6a68f83087c3f2eff932840" Apr 04 02:12:17 crc kubenswrapper[4681]: I0404 02:12:17.595062 4681 scope.go:117] "RemoveContainer" containerID="776eb63df699651274965137fee32cddced1c2369254bd969aa1f6dfe0f6a8ab" Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.433792 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"3eab3894c2502fa5409d9eb2e7b460785e9074902eefb9993a67106df072e822"} Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.434056 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"a8594c8d66017365a56a3578fcb98f2327ba0938a9cae165c02d6d014d22ff9e"} Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.434071 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"69296ff66e03375738f6c58d13275d5130335b5ca1574daa1f39a291d03a0b40"} Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.434085 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"0b722ed51aa3427a74d0e87745ccf498143358ed6402e8a7aaf547ef4c348cdb"} Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.434098 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"610fbb7c418d3dfb2fccfa6c5b727cc7fdd697a53054357f70587ca286c91288"} Apr 04 02:12:18 crc kubenswrapper[4681]: I0404 02:12:18.434111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"ee4ff1226adc04fc709305ceaae49d763b2ea3a04295808d0ece0f5a108f3821"} Apr 04 02:12:19 crc kubenswrapper[4681]: I0404 02:12:19.212533 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d004639b-c07a-4401-8588-8af4ed981db3" 
path="/var/lib/kubelet/pods/d004639b-c07a-4401-8588-8af4ed981db3/volumes" Apr 04 02:12:21 crc kubenswrapper[4681]: I0404 02:12:21.464214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"258c2f0ea7f42629af401c3a5b9cef4d5aeff372ca03efe38420dc494fc35e36"} Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.493599 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" event={"ID":"fc0de3af-9143-47d3-a65c-0ff7cb2f46d0","Type":"ContainerStarted","Data":"d2b7b076fbb75f0029f519db9f155d80250869619dc9ac352de16b670faa83b2"} Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.493942 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.494028 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.494091 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.531355 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" podStartSLOduration=7.531334122 podStartE2EDuration="7.531334122s" podCreationTimestamp="2026-04-04 02:12:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:12:23.526623169 +0000 UTC m=+1023.192398299" watchObservedRunningTime="2026-04-04 02:12:23.531334122 +0000 UTC m=+1023.197109262" Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.539521 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:23 crc kubenswrapper[4681]: I0404 02:12:23.540086 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.444726 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl"] Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.446103 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.448035 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.448281 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.448342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.448377 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll7fs\" 
(UniqueName: \"kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.459764 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl"] Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.549123 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.549198 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll7fs\" (UniqueName: \"kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.549242 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.549727 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.550234 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.568128 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll7fs\" (UniqueName: \"kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs\") pod \"ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.764470 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:41 crc kubenswrapper[4681]: I0404 02:12:41.955902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl"] Apr 04 02:12:42 crc kubenswrapper[4681]: I0404 02:12:42.601432 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerID="422e622315838eaea850250528044a1602f5788766fc9da9977d633a0bbd6beb" exitCode=0 Apr 04 02:12:42 crc kubenswrapper[4681]: I0404 02:12:42.601514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" event={"ID":"2b4070a8-f657-4819-8ab7-b105f33e5560","Type":"ContainerDied","Data":"422e622315838eaea850250528044a1602f5788766fc9da9977d633a0bbd6beb"} Apr 04 02:12:42 crc kubenswrapper[4681]: I0404 02:12:42.601705 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" event={"ID":"2b4070a8-f657-4819-8ab7-b105f33e5560","Type":"ContainerStarted","Data":"c39de47ae3075cb1959d38a5829d1a785a3be4068bf19186cec552119bec7cd8"} Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.813829 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.815095 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.825349 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.979349 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mg9b\" (UniqueName: \"kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.980029 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:43 crc kubenswrapper[4681]: I0404 02:12:43.980092 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.082122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.082554 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2mg9b\" (UniqueName: \"kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.082886 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.082991 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.083730 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.106183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mg9b\" (UniqueName: \"kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b\") pod \"redhat-operators-ljx6g\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.131401 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.549026 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:12:44 crc kubenswrapper[4681]: W0404 02:12:44.558286 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9db2935_87b5_4918_b3a7_aaf59cd53a45.slice/crio-f48321c473324ab2d0208883eb0814f2ca7ecda8dded3898dcd5481f5e588423 WatchSource:0}: Error finding container f48321c473324ab2d0208883eb0814f2ca7ecda8dded3898dcd5481f5e588423: Status 404 returned error can't find the container with id f48321c473324ab2d0208883eb0814f2ca7ecda8dded3898dcd5481f5e588423 Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.614981 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerStarted","Data":"f48321c473324ab2d0208883eb0814f2ca7ecda8dded3898dcd5481f5e588423"} Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.617343 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerID="dbe420dbc095fdf538a1835c22d559b0c26c7dbf4ef7dbd821f79b95b82afa97" exitCode=0 Apr 04 02:12:44 crc kubenswrapper[4681]: I0404 02:12:44.617381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" event={"ID":"2b4070a8-f657-4819-8ab7-b105f33e5560","Type":"ContainerDied","Data":"dbe420dbc095fdf538a1835c22d559b0c26c7dbf4ef7dbd821f79b95b82afa97"} Apr 04 02:12:45 crc kubenswrapper[4681]: I0404 02:12:45.627122 4681 generic.go:334] "Generic (PLEG): container finished" podID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerID="bccfef3a0ea84aa0f2c4a4e0486b8996235c5c3f4a390979b4524db49efdde69" exitCode=0 Apr 04 02:12:45 crc 
kubenswrapper[4681]: I0404 02:12:45.627210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerDied","Data":"bccfef3a0ea84aa0f2c4a4e0486b8996235c5c3f4a390979b4524db49efdde69"} Apr 04 02:12:45 crc kubenswrapper[4681]: I0404 02:12:45.632938 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerID="93678e13697caee91c56dde57849dfb461ce3acbb429f0ae1d038206311d7f3a" exitCode=0 Apr 04 02:12:45 crc kubenswrapper[4681]: I0404 02:12:45.633020 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" event={"ID":"2b4070a8-f657-4819-8ab7-b105f33e5560","Type":"ContainerDied","Data":"93678e13697caee91c56dde57849dfb461ce3acbb429f0ae1d038206311d7f3a"} Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.896062 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.916619 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util\") pod \"2b4070a8-f657-4819-8ab7-b105f33e5560\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.916656 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll7fs\" (UniqueName: \"kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs\") pod \"2b4070a8-f657-4819-8ab7-b105f33e5560\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.916689 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle\") pod \"2b4070a8-f657-4819-8ab7-b105f33e5560\" (UID: \"2b4070a8-f657-4819-8ab7-b105f33e5560\") " Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.920156 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle" (OuterVolumeSpecName: "bundle") pod "2b4070a8-f657-4819-8ab7-b105f33e5560" (UID: "2b4070a8-f657-4819-8ab7-b105f33e5560"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.928412 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs" (OuterVolumeSpecName: "kube-api-access-ll7fs") pod "2b4070a8-f657-4819-8ab7-b105f33e5560" (UID: "2b4070a8-f657-4819-8ab7-b105f33e5560"). InnerVolumeSpecName "kube-api-access-ll7fs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:12:46 crc kubenswrapper[4681]: I0404 02:12:46.941699 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util" (OuterVolumeSpecName: "util") pod "2b4070a8-f657-4819-8ab7-b105f33e5560" (UID: "2b4070a8-f657-4819-8ab7-b105f33e5560"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.017360 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-util\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.017401 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll7fs\" (UniqueName: \"kubernetes.io/projected/2b4070a8-f657-4819-8ab7-b105f33e5560-kube-api-access-ll7fs\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.017413 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2b4070a8-f657-4819-8ab7-b105f33e5560-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.245086 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4rmkn" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.648935 4681 generic.go:334] "Generic (PLEG): container finished" podID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerID="f93fd6aeec8d942fac0373e5adcf58e659952ac55fb789856505ccd970e526c5" exitCode=0 Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.649154 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" 
event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerDied","Data":"f93fd6aeec8d942fac0373e5adcf58e659952ac55fb789856505ccd970e526c5"} Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.654808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" event={"ID":"2b4070a8-f657-4819-8ab7-b105f33e5560","Type":"ContainerDied","Data":"c39de47ae3075cb1959d38a5829d1a785a3be4068bf19186cec552119bec7cd8"} Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.654848 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl" Apr 04 02:12:47 crc kubenswrapper[4681]: I0404 02:12:47.654865 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c39de47ae3075cb1959d38a5829d1a785a3be4068bf19186cec552119bec7cd8" Apr 04 02:12:48 crc kubenswrapper[4681]: I0404 02:12:48.665671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerStarted","Data":"f516719546bfcaf53f34ab5e04081614b308ed472a0b2ad496c87a68ee3cd5b1"} Apr 04 02:12:48 crc kubenswrapper[4681]: I0404 02:12:48.691488 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ljx6g" podStartSLOduration=3.195844788 podStartE2EDuration="5.691470571s" podCreationTimestamp="2026-04-04 02:12:43 +0000 UTC" firstStartedPulling="2026-04-04 02:12:45.629065061 +0000 UTC m=+1045.294840221" lastFinishedPulling="2026-04-04 02:12:48.124690884 +0000 UTC m=+1047.790466004" observedRunningTime="2026-04-04 02:12:48.687677393 +0000 UTC m=+1048.353452513" watchObservedRunningTime="2026-04-04 02:12:48.691470571 +0000 UTC m=+1048.357245691" Apr 04 02:12:54 crc kubenswrapper[4681]: I0404 02:12:54.132079 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:54 crc kubenswrapper[4681]: I0404 02:12:54.132455 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:12:55 crc kubenswrapper[4681]: I0404 02:12:55.181302 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ljx6g" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="registry-server" probeResult="failure" output=< Apr 04 02:12:55 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:12:55 crc kubenswrapper[4681]: > Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.167570 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668"] Apr 04 02:13:01 crc kubenswrapper[4681]: E0404 02:13:01.168029 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="extract" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.168040 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="extract" Apr 04 02:13:01 crc kubenswrapper[4681]: E0404 02:13:01.168053 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="pull" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.168058 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="pull" Apr 04 02:13:01 crc kubenswrapper[4681]: E0404 02:13:01.168066 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="util" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.168074 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="util" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.168195 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b4070a8-f657-4819-8ab7-b105f33e5560" containerName="extract" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.168876 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.187444 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.210878 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668"] Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.303160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whbk4\" (UniqueName: \"kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.303232 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.303440 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.404390 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whbk4\" (UniqueName: \"kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.404449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.404498 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.404969 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util\") 
pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.405018 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.425764 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whbk4\" (UniqueName: \"kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:01 crc kubenswrapper[4681]: I0404 02:13:01.485410 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:02 crc kubenswrapper[4681]: I0404 02:13:02.150559 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668"] Apr 04 02:13:02 crc kubenswrapper[4681]: I0404 02:13:02.778196 4681 generic.go:334] "Generic (PLEG): container finished" podID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerID="1e5d5e051d02b951be136550bbf8a7880156bf981589d1aaf0df20ad57314e19" exitCode=0 Apr 04 02:13:02 crc kubenswrapper[4681]: I0404 02:13:02.778368 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" event={"ID":"d625d583-bc5e-4cf4-914b-09f9452b7633","Type":"ContainerDied","Data":"1e5d5e051d02b951be136550bbf8a7880156bf981589d1aaf0df20ad57314e19"} Apr 04 02:13:02 crc kubenswrapper[4681]: I0404 02:13:02.778603 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" event={"ID":"d625d583-bc5e-4cf4-914b-09f9452b7633","Type":"ContainerStarted","Data":"03f4a5802ad0057674537639fae9d6bdda87a76451f2ac5a02a78fe5be471075"} Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.180067 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.244477 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.793706 4681 generic.go:334] "Generic (PLEG): container finished" podID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerID="de47e38d24ad5937c3bba4519843308321fb54b9c12095c96630bff291e171b9" exitCode=0 Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.793755 
4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" event={"ID":"d625d583-bc5e-4cf4-914b-09f9452b7633","Type":"ContainerDied","Data":"de47e38d24ad5937c3bba4519843308321fb54b9c12095c96630bff291e171b9"} Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.867428 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4"] Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.868351 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.871119 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.871568 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.871589 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-g85g9" Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.884377 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4"] Apr 04 02:13:04 crc kubenswrapper[4681]: I0404 02:13:04.957941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nlvz\" (UniqueName: \"kubernetes.io/projected/b23b52f2-8062-48f1-a937-590414fcb369-kube-api-access-2nlvz\") pod \"obo-prometheus-operator-86dff4bf76-pkfp4\" (UID: \"b23b52f2-8062-48f1-a937-590414fcb369\") " pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.003212 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.004117 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.006182 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-h6b58" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.006702 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.014288 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.015687 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.035787 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.060560 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nlvz\" (UniqueName: \"kubernetes.io/projected/b23b52f2-8062-48f1-a937-590414fcb369-kube-api-access-2nlvz\") pod \"obo-prometheus-operator-86dff4bf76-pkfp4\" (UID: \"b23b52f2-8062-48f1-a937-590414fcb369\") " pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.084981 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.105398 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nlvz\" (UniqueName: \"kubernetes.io/projected/b23b52f2-8062-48f1-a937-590414fcb369-kube-api-access-2nlvz\") pod \"obo-prometheus-operator-86dff4bf76-pkfp4\" (UID: \"b23b52f2-8062-48f1-a937-590414fcb369\") " pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.161993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.162069 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.162091 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.162122 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.197437 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.260073 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-dd944d769-szx5n"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.260790 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.266142 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.266205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.266226 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.266257 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.267888 4681 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-gjk88" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.268205 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.269813 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.273092 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.274381 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c11383e1-c1fe-4d1e-ab47-234adca1f589-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv\" (UID: \"c11383e1-c1fe-4d1e-ab47-234adca1f589\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.274953 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca6efa76-cc20-4742-9c8b-1ef70ff6acff-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-57bc579d78-bpz42\" (UID: \"ca6efa76-cc20-4742-9c8b-1ef70ff6acff\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.282790 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-dd944d769-szx5n"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.324327 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.336751 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.367330 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m44h\" (UniqueName: \"kubernetes.io/projected/a510961d-019d-41d4-8a75-66f69f5d6728-kube-api-access-2m44h\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.367422 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a510961d-019d-41d4-8a75-66f69f5d6728-observability-operator-tls\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.419753 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-74445bf4b8-4fv89"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.420842 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.427837 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-r9n99" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.433347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-74445bf4b8-4fv89"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.468049 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a510961d-019d-41d4-8a75-66f69f5d6728-observability-operator-tls\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.468149 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m44h\" (UniqueName: \"kubernetes.io/projected/a510961d-019d-41d4-8a75-66f69f5d6728-kube-api-access-2m44h\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.487675 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a510961d-019d-41d4-8a75-66f69f5d6728-observability-operator-tls\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.498151 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m44h\" (UniqueName: 
\"kubernetes.io/projected/a510961d-019d-41d4-8a75-66f69f5d6728-kube-api-access-2m44h\") pod \"observability-operator-dd944d769-szx5n\" (UID: \"a510961d-019d-41d4-8a75-66f69f5d6728\") " pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.570489 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4npk\" (UniqueName: \"kubernetes.io/projected/0c2979a7-06b5-4451-875e-f8e64da75780-kube-api-access-k4npk\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.570568 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2979a7-06b5-4451-875e-f8e64da75780-openshift-service-ca\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.638868 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.674285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4npk\" (UniqueName: \"kubernetes.io/projected/0c2979a7-06b5-4451-875e-f8e64da75780-kube-api-access-k4npk\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.674386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2979a7-06b5-4451-875e-f8e64da75780-openshift-service-ca\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.675615 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0c2979a7-06b5-4451-875e-f8e64da75780-openshift-service-ca\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.702467 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4npk\" (UniqueName: \"kubernetes.io/projected/0c2979a7-06b5-4451-875e-f8e64da75780-kube-api-access-k4npk\") pod \"perses-operator-74445bf4b8-4fv89\" (UID: \"0c2979a7-06b5-4451-875e-f8e64da75780\") " pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.736438 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42"] Apr 04 02:13:05 crc kubenswrapper[4681]: 
W0404 02:13:05.754449 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca6efa76_cc20_4742_9c8b_1ef70ff6acff.slice/crio-ebe7c142a60ac1efdb71614a3f91ee7dbcdfb45e66ed3387eb561927858010ef WatchSource:0}: Error finding container ebe7c142a60ac1efdb71614a3f91ee7dbcdfb45e66ed3387eb561927858010ef: Status 404 returned error can't find the container with id ebe7c142a60ac1efdb71614a3f91ee7dbcdfb45e66ed3387eb561927858010ef Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.779613 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.813657 4681 generic.go:334] "Generic (PLEG): container finished" podID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerID="3f422b551526c1b019e5d68a11e73b43b65b48623ae560d2a564bf7ecf905cd3" exitCode=0 Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.814008 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" event={"ID":"d625d583-bc5e-4cf4-914b-09f9452b7633","Type":"ContainerDied","Data":"3f422b551526c1b019e5d68a11e73b43b65b48623ae560d2a564bf7ecf905cd3"} Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.819740 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" event={"ID":"ca6efa76-cc20-4742-9c8b-1ef70ff6acff","Type":"ContainerStarted","Data":"ebe7c142a60ac1efdb71614a3f91ee7dbcdfb45e66ed3387eb561927858010ef"} Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.829067 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.873449 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv"] Apr 04 02:13:05 crc kubenswrapper[4681]: I0404 02:13:05.898249 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-dd944d769-szx5n"] Apr 04 02:13:05 crc kubenswrapper[4681]: W0404 02:13:05.908445 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda510961d_019d_41d4_8a75_66f69f5d6728.slice/crio-6bbeb9065b4455f9020ae9b8f2c7ff607afd075943c692f5bc5331be01d3b9d7 WatchSource:0}: Error finding container 6bbeb9065b4455f9020ae9b8f2c7ff607afd075943c692f5bc5331be01d3b9d7: Status 404 returned error can't find the container with id 6bbeb9065b4455f9020ae9b8f2c7ff607afd075943c692f5bc5331be01d3b9d7 Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.277301 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-74445bf4b8-4fv89"] Apr 04 02:13:06 crc kubenswrapper[4681]: W0404 02:13:06.291124 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c2979a7_06b5_4451_875e_f8e64da75780.slice/crio-b25af21389b649ac60b67274ef4b7b3fca4ac6f85d00719e8cabba73a1dea86a WatchSource:0}: Error finding container b25af21389b649ac60b67274ef4b7b3fca4ac6f85d00719e8cabba73a1dea86a: Status 404 returned error can't find the container with id b25af21389b649ac60b67274ef4b7b3fca4ac6f85d00719e8cabba73a1dea86a Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.455849 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.456095 4681 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ljx6g" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="registry-server" containerID="cri-o://f516719546bfcaf53f34ab5e04081614b308ed472a0b2ad496c87a68ee3cd5b1" gracePeriod=2 Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.832398 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" event={"ID":"b23b52f2-8062-48f1-a937-590414fcb369","Type":"ContainerStarted","Data":"d24261c549c2d3a330f77c6779f1264728c07a6b8b19a264ad1d0891492231dd"} Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.841386 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" event={"ID":"0c2979a7-06b5-4451-875e-f8e64da75780","Type":"ContainerStarted","Data":"b25af21389b649ac60b67274ef4b7b3fca4ac6f85d00719e8cabba73a1dea86a"} Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.842774 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" event={"ID":"c11383e1-c1fe-4d1e-ab47-234adca1f589","Type":"ContainerStarted","Data":"e16541f00e4ada048ab28c0405e60ec730b18a4425d96f5bae2df5748ac44405"} Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.844962 4681 generic.go:334] "Generic (PLEG): container finished" podID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerID="f516719546bfcaf53f34ab5e04081614b308ed472a0b2ad496c87a68ee3cd5b1" exitCode=0 Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.845032 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerDied","Data":"f516719546bfcaf53f34ab5e04081614b308ed472a0b2ad496c87a68ee3cd5b1"} Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.846367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operators/observability-operator-dd944d769-szx5n" event={"ID":"a510961d-019d-41d4-8a75-66f69f5d6728","Type":"ContainerStarted","Data":"6bbeb9065b4455f9020ae9b8f2c7ff607afd075943c692f5bc5331be01d3b9d7"} Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.907594 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.991991 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mg9b\" (UniqueName: \"kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b\") pod \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.992052 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities\") pod \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.992216 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content\") pod \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\" (UID: \"a9db2935-87b5-4918-b3a7-aaf59cd53a45\") " Apr 04 02:13:06 crc kubenswrapper[4681]: I0404 02:13:06.993593 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities" (OuterVolumeSpecName: "utilities") pod "a9db2935-87b5-4918-b3a7-aaf59cd53a45" (UID: "a9db2935-87b5-4918-b3a7-aaf59cd53a45"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.002632 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.003930 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b" (OuterVolumeSpecName: "kube-api-access-2mg9b") pod "a9db2935-87b5-4918-b3a7-aaf59cd53a45" (UID: "a9db2935-87b5-4918-b3a7-aaf59cd53a45"). InnerVolumeSpecName "kube-api-access-2mg9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.111680 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mg9b\" (UniqueName: \"kubernetes.io/projected/a9db2935-87b5-4918-b3a7-aaf59cd53a45-kube-api-access-2mg9b\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.160015 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9db2935-87b5-4918-b3a7-aaf59cd53a45" (UID: "a9db2935-87b5-4918-b3a7-aaf59cd53a45"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.186885 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.213317 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9db2935-87b5-4918-b3a7-aaf59cd53a45-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.314752 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whbk4\" (UniqueName: \"kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4\") pod \"d625d583-bc5e-4cf4-914b-09f9452b7633\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.314802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle\") pod \"d625d583-bc5e-4cf4-914b-09f9452b7633\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.314854 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util\") pod \"d625d583-bc5e-4cf4-914b-09f9452b7633\" (UID: \"d625d583-bc5e-4cf4-914b-09f9452b7633\") " Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.319731 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle" (OuterVolumeSpecName: "bundle") pod "d625d583-bc5e-4cf4-914b-09f9452b7633" (UID: "d625d583-bc5e-4cf4-914b-09f9452b7633"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.320391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4" (OuterVolumeSpecName: "kube-api-access-whbk4") pod "d625d583-bc5e-4cf4-914b-09f9452b7633" (UID: "d625d583-bc5e-4cf4-914b-09f9452b7633"). InnerVolumeSpecName "kube-api-access-whbk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.333219 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util" (OuterVolumeSpecName: "util") pod "d625d583-bc5e-4cf4-914b-09f9452b7633" (UID: "d625d583-bc5e-4cf4-914b-09f9452b7633"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.416034 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whbk4\" (UniqueName: \"kubernetes.io/projected/d625d583-bc5e-4cf4-914b-09f9452b7633-kube-api-access-whbk4\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.416071 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.416080 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d625d583-bc5e-4cf4-914b-09f9452b7633-util\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.857426 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ljx6g" 
event={"ID":"a9db2935-87b5-4918-b3a7-aaf59cd53a45","Type":"ContainerDied","Data":"f48321c473324ab2d0208883eb0814f2ca7ecda8dded3898dcd5481f5e588423"} Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.857523 4681 scope.go:117] "RemoveContainer" containerID="f516719546bfcaf53f34ab5e04081614b308ed472a0b2ad496c87a68ee3cd5b1" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.857444 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ljx6g" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.864758 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" event={"ID":"d625d583-bc5e-4cf4-914b-09f9452b7633","Type":"ContainerDied","Data":"03f4a5802ad0057674537639fae9d6bdda87a76451f2ac5a02a78fe5be471075"} Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.864792 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.864914 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f4a5802ad0057674537639fae9d6bdda87a76451f2ac5a02a78fe5be471075" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.884739 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.894821 4681 scope.go:117] "RemoveContainer" containerID="f93fd6aeec8d942fac0373e5adcf58e659952ac55fb789856505ccd970e526c5" Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.902797 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ljx6g"] Apr 04 02:13:07 crc kubenswrapper[4681]: I0404 02:13:07.939509 4681 scope.go:117] "RemoveContainer" containerID="bccfef3a0ea84aa0f2c4a4e0486b8996235c5c3f4a390979b4524db49efdde69" Apr 04 02:13:09 crc kubenswrapper[4681]: I0404 02:13:09.210623 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" path="/var/lib/kubelet/pods/a9db2935-87b5-4918-b3a7-aaf59cd53a45/volumes" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.921284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-dd944d769-szx5n" event={"ID":"a510961d-019d-41d4-8a75-66f69f5d6728","Type":"ContainerStarted","Data":"abe07362247bfb416274fea19e4a0cfa52e4989811ef8c74a02d06b846e44c81"} Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.921872 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.923089 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" 
event={"ID":"b23b52f2-8062-48f1-a937-590414fcb369","Type":"ContainerStarted","Data":"2df5c4c36eab9c09b6ef2d9d158ea5ac69a9208ba455f317fad4d1be5510aba7"} Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.923905 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-dd944d769-szx5n" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.925159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" event={"ID":"0c2979a7-06b5-4451-875e-f8e64da75780","Type":"ContainerStarted","Data":"afa9b1459093c104a3d581e0cad4dfe4543086f294eeb1731dc0fdf5b8519c83"} Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.925303 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.927889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" event={"ID":"c11383e1-c1fe-4d1e-ab47-234adca1f589","Type":"ContainerStarted","Data":"392215f9994d6d69ff8920c64000705dbc17b3e49b1558425250275908a626c4"} Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.930819 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" event={"ID":"ca6efa76-cc20-4742-9c8b-1ef70ff6acff","Type":"ContainerStarted","Data":"e3a558192abc94e334c8d49bf2f50c5cce6205e08883ca4c2a0055a850f4a667"} Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.949798 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-dd944d769-szx5n" podStartSLOduration=2.116045317 podStartE2EDuration="9.949770341s" podCreationTimestamp="2026-04-04 02:13:05 +0000 UTC" firstStartedPulling="2026-04-04 02:13:05.914085697 +0000 UTC m=+1065.579860817" 
lastFinishedPulling="2026-04-04 02:13:13.747810701 +0000 UTC m=+1073.413585841" observedRunningTime="2026-04-04 02:13:14.946081005 +0000 UTC m=+1074.611856165" watchObservedRunningTime="2026-04-04 02:13:14.949770341 +0000 UTC m=+1074.615545481" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.969818 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv" podStartSLOduration=3.107347978 podStartE2EDuration="10.96980085s" podCreationTimestamp="2026-04-04 02:13:04 +0000 UTC" firstStartedPulling="2026-04-04 02:13:05.885757582 +0000 UTC m=+1065.551532702" lastFinishedPulling="2026-04-04 02:13:13.748210454 +0000 UTC m=+1073.413985574" observedRunningTime="2026-04-04 02:13:14.96842374 +0000 UTC m=+1074.634198920" watchObservedRunningTime="2026-04-04 02:13:14.96980085 +0000 UTC m=+1074.635575970" Apr 04 02:13:14 crc kubenswrapper[4681]: I0404 02:13:14.989018 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86dff4bf76-pkfp4" podStartSLOduration=3.048925069 podStartE2EDuration="10.988993635s" podCreationTimestamp="2026-04-04 02:13:04 +0000 UTC" firstStartedPulling="2026-04-04 02:13:05.815040353 +0000 UTC m=+1065.480815473" lastFinishedPulling="2026-04-04 02:13:13.755108919 +0000 UTC m=+1073.420884039" observedRunningTime="2026-04-04 02:13:14.988062028 +0000 UTC m=+1074.653837188" watchObservedRunningTime="2026-04-04 02:13:14.988993635 +0000 UTC m=+1074.654768775" Apr 04 02:13:15 crc kubenswrapper[4681]: I0404 02:13:15.067164 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-57bc579d78-bpz42" podStartSLOduration=3.112414552 podStartE2EDuration="11.067143696s" podCreationTimestamp="2026-04-04 02:13:04 +0000 UTC" firstStartedPulling="2026-04-04 02:13:05.765916506 +0000 UTC m=+1065.431691626" 
lastFinishedPulling="2026-04-04 02:13:13.72064565 +0000 UTC m=+1073.386420770" observedRunningTime="2026-04-04 02:13:15.056314839 +0000 UTC m=+1074.722090009" watchObservedRunningTime="2026-04-04 02:13:15.067143696 +0000 UTC m=+1074.732918826" Apr 04 02:13:15 crc kubenswrapper[4681]: I0404 02:13:15.084863 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" podStartSLOduration=2.585664671 podStartE2EDuration="10.084844059s" podCreationTimestamp="2026-04-04 02:13:05 +0000 UTC" firstStartedPulling="2026-04-04 02:13:06.293933112 +0000 UTC m=+1065.959708232" lastFinishedPulling="2026-04-04 02:13:13.793080959 +0000 UTC m=+1073.458887620" observedRunningTime="2026-04-04 02:13:15.081673859 +0000 UTC m=+1074.747448969" watchObservedRunningTime="2026-04-04 02:13:15.084844059 +0000 UTC m=+1074.750619179" Apr 04 02:13:25 crc kubenswrapper[4681]: I0404 02:13:25.833155 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-74445bf4b8-4fv89" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.766215 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw"] Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767043 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="extract" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767057 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="extract" Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767077 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="registry-server" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767083 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="registry-server" Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767101 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="util" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767109 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="util" Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767124 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="pull" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767130 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" containerName="pull" Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767137 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="extract-content" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767145 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="extract-content" Apr 04 02:13:41 crc kubenswrapper[4681]: E0404 02:13:41.767160 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="extract-utilities" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767167 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="extract-utilities" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767307 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9db2935-87b5-4918-b3a7-aaf59cd53a45" containerName="registry-server" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.767323 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d625d583-bc5e-4cf4-914b-09f9452b7633" 
containerName="extract" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.768230 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.773234 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw"] Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.775417 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.870201 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.870252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvhd\" (UniqueName: \"kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.870344 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") 
" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.971822 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.971912 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plvhd\" (UniqueName: \"kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.971983 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.972386 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.972628 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:41 crc kubenswrapper[4681]: I0404 02:13:41.993446 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plvhd\" (UniqueName: \"kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd\") pod \"5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:42 crc kubenswrapper[4681]: I0404 02:13:42.086546 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:42 crc kubenswrapper[4681]: I0404 02:13:42.526830 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw"] Apr 04 02:13:43 crc kubenswrapper[4681]: I0404 02:13:43.155299 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7308318-787f-4347-8939-2f27b367b588" containerID="eeb7a85cae8941c9679960662307d740aca556ccab63ed0da208289a23959b16" exitCode=0 Apr 04 02:13:43 crc kubenswrapper[4681]: I0404 02:13:43.155380 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerDied","Data":"eeb7a85cae8941c9679960662307d740aca556ccab63ed0da208289a23959b16"} Apr 04 02:13:43 crc kubenswrapper[4681]: I0404 02:13:43.155626 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerStarted","Data":"adafff515354e58173023b10375257cfd7381f2fffca0adc35ec0b4b8357e692"} Apr 04 02:13:45 crc kubenswrapper[4681]: I0404 02:13:45.169563 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerStarted","Data":"1505e5dbf3d3d7ac249a4add806e359aa8598b5dbbdf0a036f1b39614b65d042"} Apr 04 02:13:46 crc kubenswrapper[4681]: I0404 02:13:46.176166 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7308318-787f-4347-8939-2f27b367b588" containerID="1505e5dbf3d3d7ac249a4add806e359aa8598b5dbbdf0a036f1b39614b65d042" exitCode=0 Apr 04 02:13:46 crc kubenswrapper[4681]: I0404 02:13:46.176214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerDied","Data":"1505e5dbf3d3d7ac249a4add806e359aa8598b5dbbdf0a036f1b39614b65d042"} Apr 04 02:13:47 crc kubenswrapper[4681]: I0404 02:13:47.186544 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7308318-787f-4347-8939-2f27b367b588" containerID="3a300665a14e702d6fa72a7cb84384bcf215cef51ee4c6d62e660206a1c81fe3" exitCode=0 Apr 04 02:13:47 crc kubenswrapper[4681]: I0404 02:13:47.186596 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerDied","Data":"3a300665a14e702d6fa72a7cb84384bcf215cef51ee4c6d62e660206a1c81fe3"} Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.453605 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.459226 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plvhd\" (UniqueName: \"kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd\") pod \"c7308318-787f-4347-8939-2f27b367b588\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.459336 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util\") pod \"c7308318-787f-4347-8939-2f27b367b588\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.459471 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle\") pod \"c7308318-787f-4347-8939-2f27b367b588\" (UID: \"c7308318-787f-4347-8939-2f27b367b588\") " Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.460651 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle" (OuterVolumeSpecName: "bundle") pod "c7308318-787f-4347-8939-2f27b367b588" (UID: "c7308318-787f-4347-8939-2f27b367b588"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.465626 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd" (OuterVolumeSpecName: "kube-api-access-plvhd") pod "c7308318-787f-4347-8939-2f27b367b588" (UID: "c7308318-787f-4347-8939-2f27b367b588"). InnerVolumeSpecName "kube-api-access-plvhd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.469972 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util" (OuterVolumeSpecName: "util") pod "c7308318-787f-4347-8939-2f27b367b588" (UID: "c7308318-787f-4347-8939-2f27b367b588"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.560542 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-util\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.560587 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c7308318-787f-4347-8939-2f27b367b588-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:48 crc kubenswrapper[4681]: I0404 02:13:48.560606 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plvhd\" (UniqueName: \"kubernetes.io/projected/c7308318-787f-4347-8939-2f27b367b588-kube-api-access-plvhd\") on node \"crc\" DevicePath \"\"" Apr 04 02:13:49 crc kubenswrapper[4681]: I0404 02:13:49.207496 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" Apr 04 02:13:49 crc kubenswrapper[4681]: I0404 02:13:49.216227 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw" event={"ID":"c7308318-787f-4347-8939-2f27b367b588","Type":"ContainerDied","Data":"adafff515354e58173023b10375257cfd7381f2fffca0adc35ec0b4b8357e692"} Apr 04 02:13:49 crc kubenswrapper[4681]: I0404 02:13:49.216319 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="adafff515354e58173023b10375257cfd7381f2fffca0adc35ec0b4b8357e692" Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.315687 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"] Apr 04 02:13:53 crc kubenswrapper[4681]: E0404 02:13:53.315906 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="pull" Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.315917 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="pull" Apr 04 02:13:53 crc kubenswrapper[4681]: E0404 02:13:53.315933 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="util" Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.315940 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="util" Apr 04 02:13:53 crc kubenswrapper[4681]: E0404 02:13:53.315954 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="extract" Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.315960 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="extract" 
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.316052 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7308318-787f-4347-8939-2f27b367b588" containerName="extract"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.316455 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.319061 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.320346 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.321462 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8h5nb"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.327303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4t2\" (UniqueName: \"kubernetes.io/projected/dd449ba7-bc18-4cdb-8f0f-05c997e2274e-kube-api-access-pl4t2\") pod \"nmstate-operator-6b8c6447b-vd5sz\" (UID: \"dd449ba7-bc18-4cdb-8f0f-05c997e2274e\") " pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.330260 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"]
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.429314 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl4t2\" (UniqueName: \"kubernetes.io/projected/dd449ba7-bc18-4cdb-8f0f-05c997e2274e-kube-api-access-pl4t2\") pod \"nmstate-operator-6b8c6447b-vd5sz\" (UID: \"dd449ba7-bc18-4cdb-8f0f-05c997e2274e\") " pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.447583 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl4t2\" (UniqueName: \"kubernetes.io/projected/dd449ba7-bc18-4cdb-8f0f-05c997e2274e-kube-api-access-pl4t2\") pod \"nmstate-operator-6b8c6447b-vd5sz\" (UID: \"dd449ba7-bc18-4cdb-8f0f-05c997e2274e\") " pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.630215 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"
Apr 04 02:13:53 crc kubenswrapper[4681]: I0404 02:13:53.838797 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz"]
Apr 04 02:13:54 crc kubenswrapper[4681]: I0404 02:13:54.239328 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz" event={"ID":"dd449ba7-bc18-4cdb-8f0f-05c997e2274e","Type":"ContainerStarted","Data":"02f1f76822a34853ad1b80e942addfe632ebce4f0c25e6a3f55c9177d3e4728d"}
Apr 04 02:13:56 crc kubenswrapper[4681]: I0404 02:13:56.524809 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:13:56 crc kubenswrapper[4681]: I0404 02:13:56.525190 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:13:57 crc kubenswrapper[4681]: I0404 02:13:57.256387 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz" event={"ID":"dd449ba7-bc18-4cdb-8f0f-05c997e2274e","Type":"ContainerStarted","Data":"e7794cbda843d1d9443be8548be53b8dd5b940dc9973babd06764e4e618d98f1"}
Apr 04 02:13:57 crc kubenswrapper[4681]: I0404 02:13:57.273093 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6b8c6447b-vd5sz" podStartSLOduration=1.70624173 podStartE2EDuration="4.273072657s" podCreationTimestamp="2026-04-04 02:13:53 +0000 UTC" firstStartedPulling="2026-04-04 02:13:53.84624128 +0000 UTC m=+1113.512016400" lastFinishedPulling="2026-04-04 02:13:56.413072207 +0000 UTC m=+1116.078847327" observedRunningTime="2026-04-04 02:13:57.272040557 +0000 UTC m=+1116.937815667" watchObservedRunningTime="2026-04-04 02:13:57.273072657 +0000 UTC m=+1116.938847797"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.135052 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587814-cdrrh"]
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.136340 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.138426 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.138484 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.144352 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587814-cdrrh"]
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.145143 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.212732 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzs6z\" (UniqueName: \"kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z\") pod \"auto-csr-approver-29587814-cdrrh\" (UID: \"f599124d-85f4-4576-a845-ef6ae9456614\") " pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.313888 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzs6z\" (UniqueName: \"kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z\") pod \"auto-csr-approver-29587814-cdrrh\" (UID: \"f599124d-85f4-4576-a845-ef6ae9456614\") " pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.334409 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzs6z\" (UniqueName: \"kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z\") pod \"auto-csr-approver-29587814-cdrrh\" (UID: \"f599124d-85f4-4576-a845-ef6ae9456614\") " pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.461118 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:00 crc kubenswrapper[4681]: I0404 02:14:00.671234 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587814-cdrrh"]
Apr 04 02:14:01 crc kubenswrapper[4681]: I0404 02:14:01.288565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587814-cdrrh" event={"ID":"f599124d-85f4-4576-a845-ef6ae9456614","Type":"ContainerStarted","Data":"53356c901d15ad867ac7ccb969ca61ea80fd3dfd54e9c9c04f4aaf43a1b162c1"}
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.311684 4681 generic.go:334] "Generic (PLEG): container finished" podID="f599124d-85f4-4576-a845-ef6ae9456614" containerID="ef016fc482b0c4b464c6e4702f1ebe59beb821e8121c2c0a7dc7ecb4d1877291" exitCode=0
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.311787 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587814-cdrrh" event={"ID":"f599124d-85f4-4576-a845-ef6ae9456614","Type":"ContainerDied","Data":"ef016fc482b0c4b464c6e4702f1ebe59beb821e8121c2c0a7dc7ecb4d1877291"}
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.837297 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.838400 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.840358 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-sm7fd"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.850021 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.857113 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-l6xdx"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.858210 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.874209 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.875307 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.877230 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.896159 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.953028 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22w2f\" (UniqueName: \"kubernetes.io/projected/ec47a21e-ac21-4720-ac9c-b0b9f50bfc85-kube-api-access-22w2f\") pod \"nmstate-metrics-9b8c8685d-gc988\" (UID: \"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.970205 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.971147 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.973221 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.974601 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.980522 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8dz4w"
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.982966 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"]
Apr 04 02:14:02 crc kubenswrapper[4681]: I0404 02:14:02.984821 4681 scope.go:117] "RemoveContainer" containerID="14a6c65a0aca260e68eaa2d4a3d2418b9423b654ad7577f011fe33205a4e79bc"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22w2f\" (UniqueName: \"kubernetes.io/projected/ec47a21e-ac21-4720-ac9c-b0b9f50bfc85-kube-api-access-22w2f\") pod \"nmstate-metrics-9b8c8685d-gc988\" (UID: \"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054465 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-dbus-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054501 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fbb2aa57-946f-43fb-9380-83a69cced169-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054525 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkl68\" (UniqueName: \"kubernetes.io/projected/fbb2aa57-946f-43fb-9380-83a69cced169-kube-api-access-kkl68\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054553 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-ovs-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054713 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-nmstate-lock\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.054823 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwjpx\" (UniqueName: \"kubernetes.io/projected/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-kube-api-access-jwjpx\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.094159 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22w2f\" (UniqueName: \"kubernetes.io/projected/ec47a21e-ac21-4720-ac9c-b0b9f50bfc85-kube-api-access-22w2f\") pod \"nmstate-metrics-9b8c8685d-gc988\" (UID: \"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156481 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-ovs-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156553 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-nmstate-lock\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156616 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-ovs-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156638 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-nmstate-lock\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156638 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwjpx\" (UniqueName: \"kubernetes.io/projected/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-kube-api-access-jwjpx\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156719 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-plugin-serving-cert\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156900 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-dbus-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156924 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-nginx-conf\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.156970 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fbb2aa57-946f-43fb-9380-83a69cced169-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.157003 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkl68\" (UniqueName: \"kubernetes.io/projected/fbb2aa57-946f-43fb-9380-83a69cced169-kube-api-access-kkl68\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.157046 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qn62\" (UniqueName: \"kubernetes.io/projected/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-kube-api-access-8qn62\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.157225 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-dbus-socket\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.157244 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.163726 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-565957798b-tfdrf"]
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.164462 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.173020 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/fbb2aa57-946f-43fb-9380-83a69cced169-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.175930 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-565957798b-tfdrf"]
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.176781 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkl68\" (UniqueName: \"kubernetes.io/projected/fbb2aa57-946f-43fb-9380-83a69cced169-kube-api-access-kkl68\") pod \"nmstate-webhook-5f558f5558-jn9q8\" (UID: \"fbb2aa57-946f-43fb-9380-83a69cced169\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.187139 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwjpx\" (UniqueName: \"kubernetes.io/projected/e174b98a-0ca7-4dfc-846f-b0395cb9b4a4-kube-api-access-jwjpx\") pod \"nmstate-handler-l6xdx\" (UID: \"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4\") " pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.193588 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258051 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-nginx-conf\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258100 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qn62\" (UniqueName: \"kubernetes.io/projected/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-kube-api-access-8qn62\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-service-ca\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258153 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-oauth-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258203 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-plugin-serving-cert\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258218 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-trusted-ca-bundle\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258234 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-console-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258248 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-oauth-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258283 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.258301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l29gb\" (UniqueName: \"kubernetes.io/projected/864c4653-08ad-4044-8c2b-cb57117f4606-kube-api-access-l29gb\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.268834 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-nginx-conf\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.295314 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-plugin-serving-cert\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.297123 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qn62\" (UniqueName: \"kubernetes.io/projected/04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53-kube-api-access-8qn62\") pod \"nmstate-console-plugin-7b5ddc4dc7-cg7df\" (UID: \"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53\") " pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367297 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-service-ca\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367452 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-oauth-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-trusted-ca-bundle\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367528 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-console-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367544 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-oauth-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.367585 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l29gb\" (UniqueName: \"kubernetes.io/projected/864c4653-08ad-4044-8c2b-cb57117f4606-kube-api-access-l29gb\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.369108 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-oauth-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.372244 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-service-ca\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.372485 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-trusted-ca-bundle\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.372546 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/864c4653-08ad-4044-8c2b-cb57117f4606-console-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.375701 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-oauth-config\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.379516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/864c4653-08ad-4044-8c2b-cb57117f4606-console-serving-cert\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.392090 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l29gb\" (UniqueName: \"kubernetes.io/projected/864c4653-08ad-4044-8c2b-cb57117f4606-kube-api-access-l29gb\") pod \"console-565957798b-tfdrf\" (UID: \"864c4653-08ad-4044-8c2b-cb57117f4606\") " pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.392211 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8"]
Apr 04 02:14:03 crc kubenswrapper[4681]: W0404 02:14:03.397664 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbb2aa57_946f_43fb_9380_83a69cced169.slice/crio-e018be3f570f48aaccd50022c60086279b19f3f3952a0146f076800076ec6e29 WatchSource:0}: Error finding container e018be3f570f48aaccd50022c60086279b19f3f3952a0146f076800076ec6e29: Status 404 returned error can't find the container with id e018be3f570f48aaccd50022c60086279b19f3f3952a0146f076800076ec6e29
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.421800 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-gc988"]
Apr 04 02:14:03 crc kubenswrapper[4681]: W0404 02:14:03.424595 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec47a21e_ac21_4720_ac9c_b0b9f50bfc85.slice/crio-c048432438c4d8e2420a258ccb2af478b94be8cfbb277002958dca0cfd280315 WatchSource:0}: Error finding container c048432438c4d8e2420a258ccb2af478b94be8cfbb277002958dca0cfd280315: Status 404 returned error can't find the container with id c048432438c4d8e2420a258ccb2af478b94be8cfbb277002958dca0cfd280315
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.480863 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-l6xdx"
Apr 04 02:14:03 crc kubenswrapper[4681]: W0404 02:14:03.505738 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode174b98a_0ca7_4dfc_846f_b0395cb9b4a4.slice/crio-06baa517e29c566d85fb163c8fffa57190e1e1f091dd079f0f878e8edd4bc294 WatchSource:0}: Error finding container 06baa517e29c566d85fb163c8fffa57190e1e1f091dd079f0f878e8edd4bc294: Status 404 returned error can't find the container with id 06baa517e29c566d85fb163c8fffa57190e1e1f091dd079f0f878e8edd4bc294
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.514534 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.537583 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-565957798b-tfdrf"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.588216 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.672988 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzs6z\" (UniqueName: \"kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z\") pod \"f599124d-85f4-4576-a845-ef6ae9456614\" (UID: \"f599124d-85f4-4576-a845-ef6ae9456614\") "
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.677038 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z" (OuterVolumeSpecName: "kube-api-access-gzs6z") pod "f599124d-85f4-4576-a845-ef6ae9456614" (UID: "f599124d-85f4-4576-a845-ef6ae9456614"). InnerVolumeSpecName "kube-api-access-gzs6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.722224 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-565957798b-tfdrf"]
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.774295 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzs6z\" (UniqueName: \"kubernetes.io/projected/f599124d-85f4-4576-a845-ef6ae9456614-kube-api-access-gzs6z\") on node \"crc\" DevicePath \"\""
Apr 04 02:14:03 crc kubenswrapper[4681]: I0404 02:14:03.784874 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df"]
Apr 04 02:14:03 crc kubenswrapper[4681]: W0404 02:14:03.791374 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04ddbb2f_cda2_457e_8e65_e3b1e1d9ae53.slice/crio-bdaddc859220d5fef3d7a6065ef326a30fee046652105d3422b33741f50584f5 WatchSource:0}: Error finding container bdaddc859220d5fef3d7a6065ef326a30fee046652105d3422b33741f50584f5: Status 404 returned error can't find the container with id bdaddc859220d5fef3d7a6065ef326a30fee046652105d3422b33741f50584f5
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.336195 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-565957798b-tfdrf" event={"ID":"864c4653-08ad-4044-8c2b-cb57117f4606","Type":"ContainerStarted","Data":"3859d0935426ec56832fa9c0b3f3d6db1d699caf17c36f6f86b30fc8af7c07ac"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.336667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-565957798b-tfdrf" event={"ID":"864c4653-08ad-4044-8c2b-cb57117f4606","Type":"ContainerStarted","Data":"6ca7b150ca656f42dce661d525fea933d8bb6e518debb0c11cba351f9764ecd2"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.338157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l6xdx" event={"ID":"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4","Type":"ContainerStarted","Data":"06baa517e29c566d85fb163c8fffa57190e1e1f091dd079f0f878e8edd4bc294"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.341750 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988" event={"ID":"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85","Type":"ContainerStarted","Data":"c048432438c4d8e2420a258ccb2af478b94be8cfbb277002958dca0cfd280315"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.344510 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587814-cdrrh"
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.344522 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587814-cdrrh" event={"ID":"f599124d-85f4-4576-a845-ef6ae9456614","Type":"ContainerDied","Data":"53356c901d15ad867ac7ccb969ca61ea80fd3dfd54e9c9c04f4aaf43a1b162c1"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.344623 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53356c901d15ad867ac7ccb969ca61ea80fd3dfd54e9c9c04f4aaf43a1b162c1"
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.346600 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df" event={"ID":"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53","Type":"ContainerStarted","Data":"bdaddc859220d5fef3d7a6065ef326a30fee046652105d3422b33741f50584f5"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.348514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8" event={"ID":"fbb2aa57-946f-43fb-9380-83a69cced169","Type":"ContainerStarted","Data":"e018be3f570f48aaccd50022c60086279b19f3f3952a0146f076800076ec6e29"}
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.370147 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-565957798b-tfdrf" podStartSLOduration=1.370123676 podStartE2EDuration="1.370123676s" podCreationTimestamp="2026-04-04 02:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:14:04.363474177 +0000 UTC m=+1124.029249307" watchObservedRunningTime="2026-04-04 02:14:04.370123676 +0000 UTC m=+1124.035898806"
Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.568550 4681 kubelet.go:2437] "SyncLoop DELETE" source="api"
pods=["openshift-infra/auto-csr-approver-29587808-zjwxg"] Apr 04 02:14:04 crc kubenswrapper[4681]: I0404 02:14:04.574861 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587808-zjwxg"] Apr 04 02:14:05 crc kubenswrapper[4681]: I0404 02:14:05.210250 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6587f6a2-6fa3-4c4c-9e71-2f6f48027630" path="/var/lib/kubelet/pods/6587f6a2-6fa3-4c4c-9e71-2f6f48027630/volumes" Apr 04 02:14:06 crc kubenswrapper[4681]: I0404 02:14:06.364752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988" event={"ID":"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85","Type":"ContainerStarted","Data":"ea9d2807728ce0bfbd4a65530cf21f042014c39c8e64a5842eb8acd1e5298ad0"} Apr 04 02:14:06 crc kubenswrapper[4681]: I0404 02:14:06.366355 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8" event={"ID":"fbb2aa57-946f-43fb-9380-83a69cced169","Type":"ContainerStarted","Data":"2c45b0bf089f268a8593aa999a557466a7ca2e70fba5dd5a1004b06e65743acf"} Apr 04 02:14:06 crc kubenswrapper[4681]: I0404 02:14:06.366511 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8" Apr 04 02:14:06 crc kubenswrapper[4681]: I0404 02:14:06.368671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-l6xdx" event={"ID":"e174b98a-0ca7-4dfc-846f-b0395cb9b4a4","Type":"ContainerStarted","Data":"3d26e3661b8a41f651a34cc7596bb35ebb33940775def6c762b9a7f42d1cc9fa"} Apr 04 02:14:06 crc kubenswrapper[4681]: I0404 02:14:06.388533 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8" podStartSLOduration=1.9619563759999998 podStartE2EDuration="4.388513426s" podCreationTimestamp="2026-04-04 02:14:02 +0000 UTC" firstStartedPulling="2026-04-04 
02:14:03.403427444 +0000 UTC m=+1123.069202564" lastFinishedPulling="2026-04-04 02:14:05.829984494 +0000 UTC m=+1125.495759614" observedRunningTime="2026-04-04 02:14:06.38582802 +0000 UTC m=+1126.051603140" watchObservedRunningTime="2026-04-04 02:14:06.388513426 +0000 UTC m=+1126.054288556" Apr 04 02:14:07 crc kubenswrapper[4681]: I0404 02:14:07.374811 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-l6xdx" Apr 04 02:14:07 crc kubenswrapper[4681]: I0404 02:14:07.391126 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-l6xdx" podStartSLOduration=3.091923549 podStartE2EDuration="5.391111929s" podCreationTimestamp="2026-04-04 02:14:02 +0000 UTC" firstStartedPulling="2026-04-04 02:14:03.507689447 +0000 UTC m=+1123.173464567" lastFinishedPulling="2026-04-04 02:14:05.806877827 +0000 UTC m=+1125.472652947" observedRunningTime="2026-04-04 02:14:07.389396841 +0000 UTC m=+1127.055171971" watchObservedRunningTime="2026-04-04 02:14:07.391111929 +0000 UTC m=+1127.056887049" Apr 04 02:14:08 crc kubenswrapper[4681]: I0404 02:14:08.386681 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df" event={"ID":"04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53","Type":"ContainerStarted","Data":"8243dd4fc96972ababc79fc0e9c4faac7424bf09119fd558d54416d7ab2f8631"} Apr 04 02:14:08 crc kubenswrapper[4681]: I0404 02:14:08.407757 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7b5ddc4dc7-cg7df" podStartSLOduration=2.58422757 podStartE2EDuration="6.40772104s" podCreationTimestamp="2026-04-04 02:14:02 +0000 UTC" firstStartedPulling="2026-04-04 02:14:03.794009563 +0000 UTC m=+1123.459784683" lastFinishedPulling="2026-04-04 02:14:07.617503013 +0000 UTC m=+1127.283278153" observedRunningTime="2026-04-04 02:14:08.401354259 +0000 UTC m=+1128.067129379" 
watchObservedRunningTime="2026-04-04 02:14:08.40772104 +0000 UTC m=+1128.073496190" Apr 04 02:14:09 crc kubenswrapper[4681]: I0404 02:14:09.396532 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988" event={"ID":"ec47a21e-ac21-4720-ac9c-b0b9f50bfc85","Type":"ContainerStarted","Data":"02c0eab3751a8ea5e9025faf1fdbd62e61f794c147b116c56ab4765705ff8e33"} Apr 04 02:14:09 crc kubenswrapper[4681]: I0404 02:14:09.422860 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-gc988" podStartSLOduration=2.208027349 podStartE2EDuration="7.422840949s" podCreationTimestamp="2026-04-04 02:14:02 +0000 UTC" firstStartedPulling="2026-04-04 02:14:03.426734426 +0000 UTC m=+1123.092509556" lastFinishedPulling="2026-04-04 02:14:08.641548036 +0000 UTC m=+1128.307323156" observedRunningTime="2026-04-04 02:14:09.419844153 +0000 UTC m=+1129.085619293" watchObservedRunningTime="2026-04-04 02:14:09.422840949 +0000 UTC m=+1129.088616079" Apr 04 02:14:13 crc kubenswrapper[4681]: I0404 02:14:13.517781 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-l6xdx" Apr 04 02:14:13 crc kubenswrapper[4681]: I0404 02:14:13.538671 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-565957798b-tfdrf" Apr 04 02:14:13 crc kubenswrapper[4681]: I0404 02:14:13.538734 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-565957798b-tfdrf" Apr 04 02:14:13 crc kubenswrapper[4681]: I0404 02:14:13.543367 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-565957798b-tfdrf" Apr 04 02:14:14 crc kubenswrapper[4681]: I0404 02:14:14.456848 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-565957798b-tfdrf" Apr 04 02:14:14 crc 
kubenswrapper[4681]: I0404 02:14:14.546230 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 02:14:23 crc kubenswrapper[4681]: I0404 02:14:23.198903 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-jn9q8" Apr 04 02:14:26 crc kubenswrapper[4681]: I0404 02:14:26.524823 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:14:26 crc kubenswrapper[4681]: I0404 02:14:26.525415 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.361544 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9"] Apr 04 02:14:38 crc kubenswrapper[4681]: E0404 02:14:38.362172 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f599124d-85f4-4576-a845-ef6ae9456614" containerName="oc" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.362185 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f599124d-85f4-4576-a845-ef6ae9456614" containerName="oc" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.362294 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f599124d-85f4-4576-a845-ef6ae9456614" containerName="oc" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.363050 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.364988 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.384066 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9"] Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.455187 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.455294 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.455354 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mtsb\" (UniqueName: \"kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: 
I0404 02:14:38.556957 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mtsb\" (UniqueName: \"kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.557019 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.557062 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.557614 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.557899 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.574953 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mtsb\" (UniqueName: \"kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb\") pod \"4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:38 crc kubenswrapper[4681]: I0404 02:14:38.679708 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.084791 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9"] Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.596002 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-56c4884fb5-4p4lw" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" containerID="cri-o://34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7" gracePeriod=15 Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.624048 4681 generic.go:334] "Generic (PLEG): container finished" podID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerID="d2637b7ddfef6fb4f4f0413f0f5de8f51680b03569de1c04b1f820aa62667bbc" exitCode=0 Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.624117 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" event={"ID":"992f104f-096e-415f-a791-d2f2d0bd17a7","Type":"ContainerDied","Data":"d2637b7ddfef6fb4f4f0413f0f5de8f51680b03569de1c04b1f820aa62667bbc"} Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.624206 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" event={"ID":"992f104f-096e-415f-a791-d2f2d0bd17a7","Type":"ContainerStarted","Data":"c43d0a2dcbb55fc74857384e369dd556ee5785f96fadf5a32e13fe8a252cda21"} Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.937851 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-56c4884fb5-4p4lw_1e389ab6-12e2-4fa3-b338-3e2080ab710e/console/0.log" Apr 04 02:14:39 crc kubenswrapper[4681]: I0404 02:14:39.938196 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.079768 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.079856 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.079907 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.079941 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shj89\" (UniqueName: \"kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.079987 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.080004 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.080027 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca\") pod \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\" (UID: \"1e389ab6-12e2-4fa3-b338-3e2080ab710e\") " Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.081929 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.082320 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config" (OuterVolumeSpecName: "console-config") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.082427 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.082523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca" (OuterVolumeSpecName: "service-ca") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.098914 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.099468 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.102507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89" (OuterVolumeSpecName: "kube-api-access-shj89") pod "1e389ab6-12e2-4fa3-b338-3e2080ab710e" (UID: "1e389ab6-12e2-4fa3-b338-3e2080ab710e"). InnerVolumeSpecName "kube-api-access-shj89". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181109 4681 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181142 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shj89\" (UniqueName: \"kubernetes.io/projected/1e389ab6-12e2-4fa3-b338-3e2080ab710e-kube-api-access-shj89\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181152 4681 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181161 4681 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181170 4681 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1e389ab6-12e2-4fa3-b338-3e2080ab710e-service-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181179 4681 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.181186 4681 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1e389ab6-12e2-4fa3-b338-3e2080ab710e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638631 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-56c4884fb5-4p4lw_1e389ab6-12e2-4fa3-b338-3e2080ab710e/console/0.log" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638691 4681 generic.go:334] "Generic (PLEG): container finished" podID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerID="34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7" exitCode=2 Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638724 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c4884fb5-4p4lw" event={"ID":"1e389ab6-12e2-4fa3-b338-3e2080ab710e","Type":"ContainerDied","Data":"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7"} Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-56c4884fb5-4p4lw" 
event={"ID":"1e389ab6-12e2-4fa3-b338-3e2080ab710e","Type":"ContainerDied","Data":"f3f20ee447beb38922d02430fa2b359c153eef5e049e444d2bf66a16e46ea2ae"} Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638770 4681 scope.go:117] "RemoveContainer" containerID="34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.638835 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-56c4884fb5-4p4lw" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.688889 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.694716 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-56c4884fb5-4p4lw"] Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.695311 4681 scope.go:117] "RemoveContainer" containerID="34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7" Apr 04 02:14:40 crc kubenswrapper[4681]: E0404 02:14:40.695733 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7\": container with ID starting with 34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7 not found: ID does not exist" containerID="34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7" Apr 04 02:14:40 crc kubenswrapper[4681]: I0404 02:14:40.695775 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7"} err="failed to get container status \"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7\": rpc error: code = NotFound desc = could not find container \"34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7\": container with ID starting with 
34645b1e7815792bfb49338f9456ed6bd86790abe903222e7f2c6f9aa6e7f4d7 not found: ID does not exist" Apr 04 02:14:41 crc kubenswrapper[4681]: I0404 02:14:41.215811 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" path="/var/lib/kubelet/pods/1e389ab6-12e2-4fa3-b338-3e2080ab710e/volumes" Apr 04 02:14:41 crc kubenswrapper[4681]: I0404 02:14:41.649301 4681 generic.go:334] "Generic (PLEG): container finished" podID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerID="aba08f2572bd4a9cc6782a41453ffc45b6640fce0cd7261a2e7668c2c8d88d8c" exitCode=0 Apr 04 02:14:41 crc kubenswrapper[4681]: I0404 02:14:41.649366 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" event={"ID":"992f104f-096e-415f-a791-d2f2d0bd17a7","Type":"ContainerDied","Data":"aba08f2572bd4a9cc6782a41453ffc45b6640fce0cd7261a2e7668c2c8d88d8c"} Apr 04 02:14:42 crc kubenswrapper[4681]: I0404 02:14:42.659437 4681 generic.go:334] "Generic (PLEG): container finished" podID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerID="9fb8bcf09b9d46d7947abf3fc6a345cc9dfa4c64a3588e8f9faf45619f9ec8a3" exitCode=0 Apr 04 02:14:42 crc kubenswrapper[4681]: I0404 02:14:42.659628 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" event={"ID":"992f104f-096e-415f-a791-d2f2d0bd17a7","Type":"ContainerDied","Data":"9fb8bcf09b9d46d7947abf3fc6a345cc9dfa4c64a3588e8f9faf45619f9ec8a3"} Apr 04 02:14:43 crc kubenswrapper[4681]: I0404 02:14:43.957539 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.035404 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle\") pod \"992f104f-096e-415f-a791-d2f2d0bd17a7\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.035524 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util\") pod \"992f104f-096e-415f-a791-d2f2d0bd17a7\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.035554 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mtsb\" (UniqueName: \"kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb\") pod \"992f104f-096e-415f-a791-d2f2d0bd17a7\" (UID: \"992f104f-096e-415f-a791-d2f2d0bd17a7\") " Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.036894 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle" (OuterVolumeSpecName: "bundle") pod "992f104f-096e-415f-a791-d2f2d0bd17a7" (UID: "992f104f-096e-415f-a791-d2f2d0bd17a7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.040602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb" (OuterVolumeSpecName: "kube-api-access-2mtsb") pod "992f104f-096e-415f-a791-d2f2d0bd17a7" (UID: "992f104f-096e-415f-a791-d2f2d0bd17a7"). InnerVolumeSpecName "kube-api-access-2mtsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.048912 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util" (OuterVolumeSpecName: "util") pod "992f104f-096e-415f-a791-d2f2d0bd17a7" (UID: "992f104f-096e-415f-a791-d2f2d0bd17a7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.137128 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-util\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.137174 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mtsb\" (UniqueName: \"kubernetes.io/projected/992f104f-096e-415f-a791-d2f2d0bd17a7-kube-api-access-2mtsb\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.137188 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/992f104f-096e-415f-a791-d2f2d0bd17a7-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.678623 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" event={"ID":"992f104f-096e-415f-a791-d2f2d0bd17a7","Type":"ContainerDied","Data":"c43d0a2dcbb55fc74857384e369dd556ee5785f96fadf5a32e13fe8a252cda21"} Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.678680 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c43d0a2dcbb55fc74857384e369dd556ee5785f96fadf5a32e13fe8a252cda21" Apr 04 02:14:44 crc kubenswrapper[4681]: I0404 02:14:44.678714 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.740661 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk"] Apr 04 02:14:53 crc kubenswrapper[4681]: E0404 02:14:53.741515 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="util" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741533 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="util" Apr 04 02:14:53 crc kubenswrapper[4681]: E0404 02:14:53.741550 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741558 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" Apr 04 02:14:53 crc kubenswrapper[4681]: E0404 02:14:53.741573 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="extract" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741583 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="extract" Apr 04 02:14:53 crc kubenswrapper[4681]: E0404 02:14:53.741599 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="pull" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741607 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" containerName="pull" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741738 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="992f104f-096e-415f-a791-d2f2d0bd17a7" 
containerName="extract" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.741762 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e389ab6-12e2-4fa3-b338-3e2080ab710e" containerName="console" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.742254 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.744343 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.744954 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.745052 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.745487 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-9glk8" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.746241 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.765718 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk"] Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.862410 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtl4l\" (UniqueName: \"kubernetes.io/projected/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-kube-api-access-qtl4l\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " 
pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.862691 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-webhook-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.862828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.963918 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtl4l\" (UniqueName: \"kubernetes.io/projected/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-kube-api-access-qtl4l\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.964006 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-webhook-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.964046 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.971471 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-webhook-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.977761 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx"] Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.978455 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.981329 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-apiservice-cert\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.984252 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.984407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-f6sjd" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.985926 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Apr 04 02:14:53 crc kubenswrapper[4681]: I0404 02:14:53.996965 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtl4l\" (UniqueName: \"kubernetes.io/projected/bbb46a7c-3e17-4b01-8a75-20a864bee1d3-kube-api-access-qtl4l\") pod \"metallb-operator-controller-manager-6b949c746f-bbmhk\" (UID: \"bbb46a7c-3e17-4b01-8a75-20a864bee1d3\") " pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.004746 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx"] Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.060435 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.064784 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmdt\" (UniqueName: \"kubernetes.io/projected/22266062-5a6f-4352-80ea-f9cb334bf963-kube-api-access-5bmdt\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.065027 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-webhook-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.065082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-apiservice-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.168974 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bmdt\" (UniqueName: \"kubernetes.io/projected/22266062-5a6f-4352-80ea-f9cb334bf963-kube-api-access-5bmdt\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.169028 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-webhook-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.169074 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-apiservice-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.177244 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-apiservice-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.190942 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/22266062-5a6f-4352-80ea-f9cb334bf963-webhook-cert\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: \"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.221421 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bmdt\" (UniqueName: \"kubernetes.io/projected/22266062-5a6f-4352-80ea-f9cb334bf963-kube-api-access-5bmdt\") pod \"metallb-operator-webhook-server-6b54d9cb4b-kbxnx\" (UID: 
\"22266062-5a6f-4352-80ea-f9cb334bf963\") " pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.358530 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.575744 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk"] Apr 04 02:14:54 crc kubenswrapper[4681]: W0404 02:14:54.587657 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbb46a7c_3e17_4b01_8a75_20a864bee1d3.slice/crio-e18392c3eb7ff8d58386fb3d1242ad91bb14324e202496a9180d2e28ca6995b0 WatchSource:0}: Error finding container e18392c3eb7ff8d58386fb3d1242ad91bb14324e202496a9180d2e28ca6995b0: Status 404 returned error can't find the container with id e18392c3eb7ff8d58386fb3d1242ad91bb14324e202496a9180d2e28ca6995b0 Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.745130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" event={"ID":"bbb46a7c-3e17-4b01-8a75-20a864bee1d3","Type":"ContainerStarted","Data":"e18392c3eb7ff8d58386fb3d1242ad91bb14324e202496a9180d2e28ca6995b0"} Apr 04 02:14:54 crc kubenswrapper[4681]: I0404 02:14:54.822171 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx"] Apr 04 02:14:54 crc kubenswrapper[4681]: W0404 02:14:54.827495 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22266062_5a6f_4352_80ea_f9cb334bf963.slice/crio-f5e3abbe5ad39216213a29de81266ebfaaa643a04dcd1baa096232c95bf6b930 WatchSource:0}: Error finding container 
f5e3abbe5ad39216213a29de81266ebfaaa643a04dcd1baa096232c95bf6b930: Status 404 returned error can't find the container with id f5e3abbe5ad39216213a29de81266ebfaaa643a04dcd1baa096232c95bf6b930 Apr 04 02:14:55 crc kubenswrapper[4681]: I0404 02:14:55.753509 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" event={"ID":"22266062-5a6f-4352-80ea-f9cb334bf963","Type":"ContainerStarted","Data":"f5e3abbe5ad39216213a29de81266ebfaaa643a04dcd1baa096232c95bf6b930"} Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.524101 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.524163 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.524212 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.524928 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.524997 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0" gracePeriod=600 Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.766428 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0" exitCode=0 Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.766519 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0"} Apr 04 02:14:56 crc kubenswrapper[4681]: I0404 02:14:56.766914 4681 scope.go:117] "RemoveContainer" containerID="6461d3b377cf8bfb047c484785d5de06041b8d0e8bb34eec33f278db844fd42a" Apr 04 02:14:57 crc kubenswrapper[4681]: I0404 02:14:57.777504 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75"} Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.133350 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574"] Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.134582 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.137615 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.140934 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574"] Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.141303 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.268030 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.268403 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.268442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzq9\" (UniqueName: \"kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.369064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.369138 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzq9\" (UniqueName: \"kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.369174 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.370081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.375835 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.385755 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxzq9\" (UniqueName: \"kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9\") pod \"collect-profiles-29587815-vr574\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:00 crc kubenswrapper[4681]: I0404 02:15:00.457686 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.483934 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574"] Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.803591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" event={"ID":"c3a9a69a-f531-41d0-910d-800cab47e903","Type":"ContainerStarted","Data":"312f2e48a098299def1a2c79b3a9b67f1f56d01210624b6d493c9b734e018346"} Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.803907 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" event={"ID":"c3a9a69a-f531-41d0-910d-800cab47e903","Type":"ContainerStarted","Data":"322b0d063dcddf38980fdb590ba43ffd9c8074fa9e47c4e8c2e1ac66e449a27c"} Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.805859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" 
event={"ID":"22266062-5a6f-4352-80ea-f9cb334bf963","Type":"ContainerStarted","Data":"76604b14f08c697715bdf9fc3b95e86a4da29a2c4447b8c6fbada11ef0aadba4"} Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.805960 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.807542 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" event={"ID":"bbb46a7c-3e17-4b01-8a75-20a864bee1d3","Type":"ContainerStarted","Data":"bb4fdd720e0997d8256e994381042029a72b42bdd03add7673ae79591bb549bf"} Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.807683 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.822952 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" podStartSLOduration=1.822937993 podStartE2EDuration="1.822937993s" podCreationTimestamp="2026-04-04 02:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:15:01.818844278 +0000 UTC m=+1181.484619398" watchObservedRunningTime="2026-04-04 02:15:01.822937993 +0000 UTC m=+1181.488713113" Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.847954 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" podStartSLOduration=2.514492515 podStartE2EDuration="8.847920785s" podCreationTimestamp="2026-04-04 02:14:53 +0000 UTC" firstStartedPulling="2026-04-04 02:14:54.830386516 +0000 UTC m=+1174.496161636" lastFinishedPulling="2026-04-04 02:15:01.163814786 +0000 UTC m=+1180.829589906" 
observedRunningTime="2026-04-04 02:15:01.841837043 +0000 UTC m=+1181.507612173" watchObservedRunningTime="2026-04-04 02:15:01.847920785 +0000 UTC m=+1181.513695905" Apr 04 02:15:01 crc kubenswrapper[4681]: I0404 02:15:01.879429 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" podStartSLOduration=2.345909892 podStartE2EDuration="8.879408819s" podCreationTimestamp="2026-04-04 02:14:53 +0000 UTC" firstStartedPulling="2026-04-04 02:14:54.590669665 +0000 UTC m=+1174.256444785" lastFinishedPulling="2026-04-04 02:15:01.124168592 +0000 UTC m=+1180.789943712" observedRunningTime="2026-04-04 02:15:01.863923193 +0000 UTC m=+1181.529698313" watchObservedRunningTime="2026-04-04 02:15:01.879408819 +0000 UTC m=+1181.545183939" Apr 04 02:15:02 crc kubenswrapper[4681]: I0404 02:15:02.815235 4681 generic.go:334] "Generic (PLEG): container finished" podID="c3a9a69a-f531-41d0-910d-800cab47e903" containerID="312f2e48a098299def1a2c79b3a9b67f1f56d01210624b6d493c9b734e018346" exitCode=0 Apr 04 02:15:02 crc kubenswrapper[4681]: I0404 02:15:02.815378 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" event={"ID":"c3a9a69a-f531-41d0-910d-800cab47e903","Type":"ContainerDied","Data":"312f2e48a098299def1a2c79b3a9b67f1f56d01210624b6d493c9b734e018346"} Apr 04 02:15:03 crc kubenswrapper[4681]: I0404 02:15:03.048193 4681 scope.go:117] "RemoveContainer" containerID="d023473e56730b16c4e3656681287626b938bcaea2055858c4e97a52f3d03cd3" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.075445 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.127539 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzq9\" (UniqueName: \"kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9\") pod \"c3a9a69a-f531-41d0-910d-800cab47e903\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.127629 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume\") pod \"c3a9a69a-f531-41d0-910d-800cab47e903\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.127670 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume\") pod \"c3a9a69a-f531-41d0-910d-800cab47e903\" (UID: \"c3a9a69a-f531-41d0-910d-800cab47e903\") " Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.128360 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume" (OuterVolumeSpecName: "config-volume") pod "c3a9a69a-f531-41d0-910d-800cab47e903" (UID: "c3a9a69a-f531-41d0-910d-800cab47e903"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.133429 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9" (OuterVolumeSpecName: "kube-api-access-sxzq9") pod "c3a9a69a-f531-41d0-910d-800cab47e903" (UID: "c3a9a69a-f531-41d0-910d-800cab47e903"). 
InnerVolumeSpecName "kube-api-access-sxzq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.142452 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c3a9a69a-f531-41d0-910d-800cab47e903" (UID: "c3a9a69a-f531-41d0-910d-800cab47e903"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.229302 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c3a9a69a-f531-41d0-910d-800cab47e903-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.229342 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c3a9a69a-f531-41d0-910d-800cab47e903-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.229353 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzq9\" (UniqueName: \"kubernetes.io/projected/c3a9a69a-f531-41d0-910d-800cab47e903-kube-api-access-sxzq9\") on node \"crc\" DevicePath \"\"" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.830478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" event={"ID":"c3a9a69a-f531-41d0-910d-800cab47e903","Type":"ContainerDied","Data":"322b0d063dcddf38980fdb590ba43ffd9c8074fa9e47c4e8c2e1ac66e449a27c"} Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.830813 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322b0d063dcddf38980fdb590ba43ffd9c8074fa9e47c4e8c2e1ac66e449a27c" Apr 04 02:15:04 crc kubenswrapper[4681]: I0404 02:15:04.830556 4681 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574" Apr 04 02:15:14 crc kubenswrapper[4681]: I0404 02:15:14.399547 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b54d9cb4b-kbxnx" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.063777 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b949c746f-bbmhk" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.723208 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-kj4kn"] Apr 04 02:15:34 crc kubenswrapper[4681]: E0404 02:15:34.723540 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a9a69a-f531-41d0-910d-800cab47e903" containerName="collect-profiles" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.723560 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a9a69a-f531-41d0-910d-800cab47e903" containerName="collect-profiles" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.723703 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a9a69a-f531-41d0-910d-800cab47e903" containerName="collect-profiles" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.728832 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.730651 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.732047 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd"] Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.732959 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.737503 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.737815 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2lnsc" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.740121 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.758765 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd"] Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.832458 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-lvp5w"] Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.833363 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lvp5w" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.835503 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ffgkr" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.835614 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.836016 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.836045 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.868676 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-conf\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.868751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.868791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwdsd\" (UniqueName: \"kubernetes.io/projected/82278f5d-bc0c-45d9-9efd-170e322295dd-kube-api-access-dwdsd\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.868825 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxb44\" (UniqueName: \"kubernetes.io/projected/873d7ebe-9962-4fd0-84e5-4dbc1c576644-kube-api-access-vxb44\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.869426 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-startup\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.869469 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82278f5d-bc0c-45d9-9efd-170e322295dd-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.869512 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics-certs\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.869549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-reloader\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.869615 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-sockets\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.873180 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bb64cd5d7-vnp7h"] Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.875129 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.881904 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.893302 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bb64cd5d7-vnp7h"] Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972587 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-startup\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972651 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82278f5d-bc0c-45d9-9efd-170e322295dd-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics-certs\") pod \"frr-k8s-kj4kn\" 
(UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972741 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972772 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-reloader\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972795 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972825 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjm5q\" (UniqueName: \"kubernetes.io/projected/b9303934-434e-47f9-8c2b-36d6e6320ab2-kube-api-access-qjm5q\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972860 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rdbt\" (UniqueName: \"kubernetes.io/projected/10953a36-52e8-4614-af9d-7df97c580ffc-kube-api-access-6rdbt\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " 
pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972887 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-sockets\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972912 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b9303934-434e-47f9-8c2b-36d6e6320ab2-metallb-excludel2\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-cert\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.972974 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-metrics-certs\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.973010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-conf\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.973033 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.973058 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwdsd\" (UniqueName: \"kubernetes.io/projected/82278f5d-bc0c-45d9-9efd-170e322295dd-kube-api-access-dwdsd\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.973088 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxb44\" (UniqueName: \"kubernetes.io/projected/873d7ebe-9962-4fd0-84e5-4dbc1c576644-kube-api-access-vxb44\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.973843 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-sockets\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.974101 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.974229 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-conf\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.974519 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/873d7ebe-9962-4fd0-84e5-4dbc1c576644-reloader\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.974941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/873d7ebe-9962-4fd0-84e5-4dbc1c576644-frr-startup\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.980906 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/873d7ebe-9962-4fd0-84e5-4dbc1c576644-metrics-certs\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.981093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/82278f5d-bc0c-45d9-9efd-170e322295dd-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:34 crc kubenswrapper[4681]: I0404 02:15:34.992338 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxb44\" (UniqueName: \"kubernetes.io/projected/873d7ebe-9962-4fd0-84e5-4dbc1c576644-kube-api-access-vxb44\") pod \"frr-k8s-kj4kn\" (UID: \"873d7ebe-9962-4fd0-84e5-4dbc1c576644\") " pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:35 crc 
kubenswrapper[4681]: I0404 02:15:35.001607 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwdsd\" (UniqueName: \"kubernetes.io/projected/82278f5d-bc0c-45d9-9efd-170e322295dd-kube-api-access-dwdsd\") pod \"frr-k8s-webhook-server-bcc4b6f68-wn7bd\" (UID: \"82278f5d-bc0c-45d9-9efd-170e322295dd\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.054702 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-cert\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074542 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-metrics-certs\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074630 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074672 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist\") pod \"speaker-lvp5w\" (UID: 
\"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjm5q\" (UniqueName: \"kubernetes.io/projected/b9303934-434e-47f9-8c2b-36d6e6320ab2-kube-api-access-qjm5q\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rdbt\" (UniqueName: \"kubernetes.io/projected/10953a36-52e8-4614-af9d-7df97c580ffc-kube-api-access-6rdbt\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.074778 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b9303934-434e-47f9-8c2b-36d6e6320ab2-metallb-excludel2\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.075015 4681 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.075082 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist podName:b9303934-434e-47f9-8c2b-36d6e6320ab2 nodeName:}" failed. No retries permitted until 2026-04-04 02:15:35.575063538 +0000 UTC m=+1215.240838658 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist") pod "speaker-lvp5w" (UID: "b9303934-434e-47f9-8c2b-36d6e6320ab2") : secret "metallb-memberlist" not found Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.075696 4681 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.075754 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs podName:10953a36-52e8-4614-af9d-7df97c580ffc nodeName:}" failed. No retries permitted until 2026-04-04 02:15:35.575736628 +0000 UTC m=+1215.241511748 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs") pod "controller-5bb64cd5d7-vnp7h" (UID: "10953a36-52e8-4614-af9d-7df97c580ffc") : secret "controller-certs-secret" not found Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.079485 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.079939 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b9303934-434e-47f9-8c2b-36d6e6320ab2-metallb-excludel2\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.083857 4681 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.084079 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-metrics-certs\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.094392 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-cert\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.105919 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjm5q\" (UniqueName: \"kubernetes.io/projected/b9303934-434e-47f9-8c2b-36d6e6320ab2-kube-api-access-qjm5q\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.106848 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rdbt\" (UniqueName: \"kubernetes.io/projected/10953a36-52e8-4614-af9d-7df97c580ffc-kube-api-access-6rdbt\") pod \"controller-5bb64cd5d7-vnp7h\" 
(UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.306725 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.391864 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd"] Apr 04 02:15:35 crc kubenswrapper[4681]: W0404 02:15:35.393578 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82278f5d_bc0c_45d9_9efd_170e322295dd.slice/crio-7eae60c201891a205e1f770f18cb9676ff6a2937b7bc923ccf2d8fcf0c0f2f74 WatchSource:0}: Error finding container 7eae60c201891a205e1f770f18cb9676ff6a2937b7bc923ccf2d8fcf0c0f2f74: Status 404 returned error can't find the container with id 7eae60c201891a205e1f770f18cb9676ff6a2937b7bc923ccf2d8fcf0c0f2f74 Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.581056 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.581124 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.581259 4681 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Apr 04 02:15:35 crc kubenswrapper[4681]: E0404 02:15:35.581345 4681 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist podName:b9303934-434e-47f9-8c2b-36d6e6320ab2 nodeName:}" failed. No retries permitted until 2026-04-04 02:15:36.581326953 +0000 UTC m=+1216.247102073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist") pod "speaker-lvp5w" (UID: "b9303934-434e-47f9-8c2b-36d6e6320ab2") : secret "metallb-memberlist" not found Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.585670 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/10953a36-52e8-4614-af9d-7df97c580ffc-metrics-certs\") pod \"controller-5bb64cd5d7-vnp7h\" (UID: \"10953a36-52e8-4614-af9d-7df97c580ffc\") " pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:35 crc kubenswrapper[4681]: I0404 02:15:35.810782 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.088105 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"aa486d2695258e06a0da3e508624b410c0b8bc6cf845b407c49240e78483df3d"} Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.089311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" event={"ID":"82278f5d-bc0c-45d9-9efd-170e322295dd","Type":"ContainerStarted","Data":"7eae60c201891a205e1f770f18cb9676ff6a2937b7bc923ccf2d8fcf0c0f2f74"} Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.201890 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bb64cd5d7-vnp7h"] Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.599361 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.605204 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b9303934-434e-47f9-8c2b-36d6e6320ab2-memberlist\") pod \"speaker-lvp5w\" (UID: \"b9303934-434e-47f9-8c2b-36d6e6320ab2\") " pod="metallb-system/speaker-lvp5w" Apr 04 02:15:36 crc kubenswrapper[4681]: I0404 02:15:36.660544 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-lvp5w" Apr 04 02:15:36 crc kubenswrapper[4681]: W0404 02:15:36.698526 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9303934_434e_47f9_8c2b_36d6e6320ab2.slice/crio-940b47a27752defc939313859c48da8e5a1b75d09aaef6d0e27b172e1e314bf5 WatchSource:0}: Error finding container 940b47a27752defc939313859c48da8e5a1b75d09aaef6d0e27b172e1e314bf5: Status 404 returned error can't find the container with id 940b47a27752defc939313859c48da8e5a1b75d09aaef6d0e27b172e1e314bf5 Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.098973 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bb64cd5d7-vnp7h" event={"ID":"10953a36-52e8-4614-af9d-7df97c580ffc","Type":"ContainerStarted","Data":"4022cfdb78e77750d98626290e26bc385b5a293f0cd7a699d1400f126a2ea156"} Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.099014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bb64cd5d7-vnp7h" event={"ID":"10953a36-52e8-4614-af9d-7df97c580ffc","Type":"ContainerStarted","Data":"3aa368561a0ddda0644addc9c541b802ab16ee5e2b4ed12662937a4af1973e70"} Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.099026 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bb64cd5d7-vnp7h" event={"ID":"10953a36-52e8-4614-af9d-7df97c580ffc","Type":"ContainerStarted","Data":"052a9da9dee9732837ca02ac1ec657e56f7972f2081679aa97138f99ef0913a1"} Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.099141 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.103808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lvp5w" 
event={"ID":"b9303934-434e-47f9-8c2b-36d6e6320ab2","Type":"ContainerStarted","Data":"f0c0fe3629cf11e9531ca73bca14403480c24d6b93c50ca336c337d0f49f1f6c"} Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.103871 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lvp5w" event={"ID":"b9303934-434e-47f9-8c2b-36d6e6320ab2","Type":"ContainerStarted","Data":"940b47a27752defc939313859c48da8e5a1b75d09aaef6d0e27b172e1e314bf5"} Apr 04 02:15:37 crc kubenswrapper[4681]: I0404 02:15:37.131847 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bb64cd5d7-vnp7h" podStartSLOduration=3.131822879 podStartE2EDuration="3.131822879s" podCreationTimestamp="2026-04-04 02:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:15:37.119654156 +0000 UTC m=+1216.785429286" watchObservedRunningTime="2026-04-04 02:15:37.131822879 +0000 UTC m=+1216.797598009" Apr 04 02:15:38 crc kubenswrapper[4681]: I0404 02:15:38.128109 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-lvp5w" event={"ID":"b9303934-434e-47f9-8c2b-36d6e6320ab2","Type":"ContainerStarted","Data":"cc365855b1fc6740717f3119af13a4e4037f5c1c319c3b1bed6cc18492c5bbba"} Apr 04 02:15:38 crc kubenswrapper[4681]: I0404 02:15:38.128344 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-lvp5w" Apr 04 02:15:38 crc kubenswrapper[4681]: I0404 02:15:38.147253 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-lvp5w" podStartSLOduration=4.147237429 podStartE2EDuration="4.147237429s" podCreationTimestamp="2026-04-04 02:15:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:15:38.144966125 +0000 UTC m=+1217.810741255" 
watchObservedRunningTime="2026-04-04 02:15:38.147237429 +0000 UTC m=+1217.813012549" Apr 04 02:15:44 crc kubenswrapper[4681]: I0404 02:15:44.173258 4681 generic.go:334] "Generic (PLEG): container finished" podID="873d7ebe-9962-4fd0-84e5-4dbc1c576644" containerID="2db62935b555505763f0ce35bd33226262891ec423a700dffefeec206a60789c" exitCode=0 Apr 04 02:15:44 crc kubenswrapper[4681]: I0404 02:15:44.173378 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerDied","Data":"2db62935b555505763f0ce35bd33226262891ec423a700dffefeec206a60789c"} Apr 04 02:15:44 crc kubenswrapper[4681]: I0404 02:15:44.175621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" event={"ID":"82278f5d-bc0c-45d9-9efd-170e322295dd","Type":"ContainerStarted","Data":"ba9f703c0780388d44283a07bc33552eec28c671cc56d4fef27c5b65a6e3330d"} Apr 04 02:15:44 crc kubenswrapper[4681]: I0404 02:15:44.175804 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:44 crc kubenswrapper[4681]: I0404 02:15:44.224869 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" podStartSLOduration=2.68558514 podStartE2EDuration="10.224851138s" podCreationTimestamp="2026-04-04 02:15:34 +0000 UTC" firstStartedPulling="2026-04-04 02:15:35.395226518 +0000 UTC m=+1215.061001638" lastFinishedPulling="2026-04-04 02:15:42.934492516 +0000 UTC m=+1222.600267636" observedRunningTime="2026-04-04 02:15:44.221709369 +0000 UTC m=+1223.887484509" watchObservedRunningTime="2026-04-04 02:15:44.224851138 +0000 UTC m=+1223.890626258" Apr 04 02:15:45 crc kubenswrapper[4681]: I0404 02:15:45.186959 4681 generic.go:334] "Generic (PLEG): container finished" podID="873d7ebe-9962-4fd0-84e5-4dbc1c576644" 
containerID="fea3004cabf3c0e1f4d520def91a69b60a746ec581f2e3222ca5a4b7e61cedcf" exitCode=0 Apr 04 02:15:45 crc kubenswrapper[4681]: I0404 02:15:45.187021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerDied","Data":"fea3004cabf3c0e1f4d520def91a69b60a746ec581f2e3222ca5a4b7e61cedcf"} Apr 04 02:15:46 crc kubenswrapper[4681]: I0404 02:15:46.198712 4681 generic.go:334] "Generic (PLEG): container finished" podID="873d7ebe-9962-4fd0-84e5-4dbc1c576644" containerID="aad8d8427a7b4935e676068b299a961b06d5715b0aaee5d18f59e8ac19a4ffab" exitCode=0 Apr 04 02:15:46 crc kubenswrapper[4681]: I0404 02:15:46.198782 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerDied","Data":"aad8d8427a7b4935e676068b299a961b06d5715b0aaee5d18f59e8ac19a4ffab"} Apr 04 02:15:46 crc kubenswrapper[4681]: I0404 02:15:46.665482 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-lvp5w" Apr 04 02:15:47 crc kubenswrapper[4681]: I0404 02:15:47.214024 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"8ee1284067940762212d16046de73a03f52606c1d931e4908162b4d320dccbb4"} Apr 04 02:15:47 crc kubenswrapper[4681]: I0404 02:15:47.214327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"47f87debe67b14101b8e32644b52c6f0ba114b13af91d2f9c32dfd0ea5d20df5"} Apr 04 02:15:47 crc kubenswrapper[4681]: I0404 02:15:47.214337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" 
event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"27ff52ba499dcee206b40fc12395b43ebd50f2c99f24db869a09dba69f71b88d"} Apr 04 02:15:47 crc kubenswrapper[4681]: I0404 02:15:47.214346 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"417ee868adfd7e1d8db425c45ac92412b61b177b5e6056bc10f537d9a6595f59"} Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.186599 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.187618 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.189840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-jw9np" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.191005 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m6bl\" (UniqueName: \"kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl\") pod \"openstack-operator-index-w8fgh\" (UID: \"4d9c28be-2347-44e3-ada1-3f84dc34ad7b\") " pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.192541 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.193074 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.231188 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 04 
02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.277021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"954f59c9b5ba6f87608f7848ee95b6bd076d53c44055920e5bd423e4a5aee33e"} Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.277296 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-kj4kn" event={"ID":"873d7ebe-9962-4fd0-84e5-4dbc1c576644","Type":"ContainerStarted","Data":"609a29b7200991e9249f2edd070f5b2103c10b177be6b7111d493dc7874f7caf"} Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.277713 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.291864 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m6bl\" (UniqueName: \"kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl\") pod \"openstack-operator-index-w8fgh\" (UID: \"4d9c28be-2347-44e3-ada1-3f84dc34ad7b\") " pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.307292 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-kj4kn" podStartSLOduration=7.658603352 podStartE2EDuration="15.307276292s" podCreationTimestamp="2026-04-04 02:15:34 +0000 UTC" firstStartedPulling="2026-04-04 02:15:35.306470886 +0000 UTC m=+1214.972246006" lastFinishedPulling="2026-04-04 02:15:42.955143836 +0000 UTC m=+1222.620918946" observedRunningTime="2026-04-04 02:15:49.304244297 +0000 UTC m=+1228.970019417" watchObservedRunningTime="2026-04-04 02:15:49.307276292 +0000 UTC m=+1228.973051412" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.313080 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m6bl\" (UniqueName: 
\"kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl\") pod \"openstack-operator-index-w8fgh\" (UID: \"4d9c28be-2347-44e3-ada1-3f84dc34ad7b\") " pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.518799 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:49 crc kubenswrapper[4681]: I0404 02:15:49.774777 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 04 02:15:49 crc kubenswrapper[4681]: W0404 02:15:49.780300 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d9c28be_2347_44e3_ada1_3f84dc34ad7b.slice/crio-d7aa6207b12a48e3a967e22ad3e4942c8c598727d012a81889f125aa8d45c391 WatchSource:0}: Error finding container d7aa6207b12a48e3a967e22ad3e4942c8c598727d012a81889f125aa8d45c391: Status 404 returned error can't find the container with id d7aa6207b12a48e3a967e22ad3e4942c8c598727d012a81889f125aa8d45c391 Apr 04 02:15:50 crc kubenswrapper[4681]: I0404 02:15:50.055530 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:50 crc kubenswrapper[4681]: I0404 02:15:50.094635 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:15:50 crc kubenswrapper[4681]: I0404 02:15:50.285014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8fgh" event={"ID":"4d9c28be-2347-44e3-ada1-3f84dc34ad7b","Type":"ContainerStarted","Data":"d7aa6207b12a48e3a967e22ad3e4942c8c598727d012a81889f125aa8d45c391"} Apr 04 02:15:52 crc kubenswrapper[4681]: I0404 02:15:52.550557 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 
04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.189145 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nc9vv"] Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.191480 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.215523 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc9vv"] Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.305638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8fgh" event={"ID":"4d9c28be-2347-44e3-ada1-3f84dc34ad7b","Type":"ContainerStarted","Data":"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0"} Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.322912 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w8fgh" podStartSLOduration=1.786771224 podStartE2EDuration="4.322886472s" podCreationTimestamp="2026-04-04 02:15:49 +0000 UTC" firstStartedPulling="2026-04-04 02:15:49.782733903 +0000 UTC m=+1229.448509023" lastFinishedPulling="2026-04-04 02:15:52.318849151 +0000 UTC m=+1231.984624271" observedRunningTime="2026-04-04 02:15:53.322427159 +0000 UTC m=+1232.988202319" watchObservedRunningTime="2026-04-04 02:15:53.322886472 +0000 UTC m=+1232.988661622" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.354960 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt85c\" (UniqueName: \"kubernetes.io/projected/4551c34d-f733-4478-9613-7618e59322b5-kube-api-access-bt85c\") pod \"openstack-operator-index-nc9vv\" (UID: \"4551c34d-f733-4478-9613-7618e59322b5\") " pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 
02:15:53.456076 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt85c\" (UniqueName: \"kubernetes.io/projected/4551c34d-f733-4478-9613-7618e59322b5-kube-api-access-bt85c\") pod \"openstack-operator-index-nc9vv\" (UID: \"4551c34d-f733-4478-9613-7618e59322b5\") " pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.503385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt85c\" (UniqueName: \"kubernetes.io/projected/4551c34d-f733-4478-9613-7618e59322b5-kube-api-access-bt85c\") pod \"openstack-operator-index-nc9vv\" (UID: \"4551c34d-f733-4478-9613-7618e59322b5\") " pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.511213 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:15:53 crc kubenswrapper[4681]: I0404 02:15:53.738569 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nc9vv"] Apr 04 02:15:53 crc kubenswrapper[4681]: W0404 02:15:53.743386 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4551c34d_f733_4478_9613_7618e59322b5.slice/crio-e60f4630fb07a133a8362cdd4c56acf80d26039e0e00c4dc06847dcd74c571c0 WatchSource:0}: Error finding container e60f4630fb07a133a8362cdd4c56acf80d26039e0e00c4dc06847dcd74c571c0: Status 404 returned error can't find the container with id e60f4630fb07a133a8362cdd4c56acf80d26039e0e00c4dc06847dcd74c571c0 Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.314194 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-w8fgh" podUID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" containerName="registry-server" 
containerID="cri-o://0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0" gracePeriod=2 Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.315343 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc9vv" event={"ID":"4551c34d-f733-4478-9613-7618e59322b5","Type":"ContainerStarted","Data":"61236023ec724852c480e40286d2aa926f7003c32d5ca49f33bda4677a76e2b3"} Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.315374 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nc9vv" event={"ID":"4551c34d-f733-4478-9613-7618e59322b5","Type":"ContainerStarted","Data":"e60f4630fb07a133a8362cdd4c56acf80d26039e0e00c4dc06847dcd74c571c0"} Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.341079 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nc9vv" podStartSLOduration=1.271914038 podStartE2EDuration="1.34106416s" podCreationTimestamp="2026-04-04 02:15:53 +0000 UTC" firstStartedPulling="2026-04-04 02:15:53.749001156 +0000 UTC m=+1233.414776276" lastFinishedPulling="2026-04-04 02:15:53.818151278 +0000 UTC m=+1233.483926398" observedRunningTime="2026-04-04 02:15:54.33608549 +0000 UTC m=+1234.001860610" watchObservedRunningTime="2026-04-04 02:15:54.34106416 +0000 UTC m=+1234.006839280" Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.663864 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.782455 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m6bl\" (UniqueName: \"kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl\") pod \"4d9c28be-2347-44e3-ada1-3f84dc34ad7b\" (UID: \"4d9c28be-2347-44e3-ada1-3f84dc34ad7b\") " Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.787294 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl" (OuterVolumeSpecName: "kube-api-access-4m6bl") pod "4d9c28be-2347-44e3-ada1-3f84dc34ad7b" (UID: "4d9c28be-2347-44e3-ada1-3f84dc34ad7b"). InnerVolumeSpecName "kube-api-access-4m6bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:15:54 crc kubenswrapper[4681]: I0404 02:15:54.884158 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m6bl\" (UniqueName: \"kubernetes.io/projected/4d9c28be-2347-44e3-ada1-3f84dc34ad7b-kube-api-access-4m6bl\") on node \"crc\" DevicePath \"\"" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.089040 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-wn7bd" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.323945 4681 generic.go:334] "Generic (PLEG): container finished" podID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" containerID="0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0" exitCode=0 Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.324055 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-w8fgh" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.324097 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8fgh" event={"ID":"4d9c28be-2347-44e3-ada1-3f84dc34ad7b","Type":"ContainerDied","Data":"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0"} Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.324160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8fgh" event={"ID":"4d9c28be-2347-44e3-ada1-3f84dc34ad7b","Type":"ContainerDied","Data":"d7aa6207b12a48e3a967e22ad3e4942c8c598727d012a81889f125aa8d45c391"} Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.324189 4681 scope.go:117] "RemoveContainer" containerID="0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.347856 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.349761 4681 scope.go:117] "RemoveContainer" containerID="0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0" Apr 04 02:15:55 crc kubenswrapper[4681]: E0404 02:15:55.350289 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0\": container with ID starting with 0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0 not found: ID does not exist" containerID="0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.350322 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0"} err="failed to get container status 
\"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0\": rpc error: code = NotFound desc = could not find container \"0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0\": container with ID starting with 0a5dfb685ae314a2da82fdaf513d3200f5951471869f9802a4ef9749634642a0 not found: ID does not exist" Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.352767 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-w8fgh"] Apr 04 02:15:55 crc kubenswrapper[4681]: I0404 02:15:55.814666 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bb64cd5d7-vnp7h" Apr 04 02:15:57 crc kubenswrapper[4681]: I0404 02:15:57.210030 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" path="/var/lib/kubelet/pods/4d9c28be-2347-44e3-ada1-3f84dc34ad7b/volumes" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.136705 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587816-dlnrp"] Apr 04 02:16:00 crc kubenswrapper[4681]: E0404 02:16:00.137539 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" containerName="registry-server" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.137661 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" containerName="registry-server" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.137956 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9c28be-2347-44e3-ada1-3f84dc34ad7b" containerName="registry-server" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.141012 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.144135 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.144591 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.145721 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.146373 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587816-dlnrp"] Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.158259 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh6sh\" (UniqueName: \"kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh\") pod \"auto-csr-approver-29587816-dlnrp\" (UID: \"21cbe3ab-4b9a-49b1-90c6-a86457d33b81\") " pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.259296 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh6sh\" (UniqueName: \"kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh\") pod \"auto-csr-approver-29587816-dlnrp\" (UID: \"21cbe3ab-4b9a-49b1-90c6-a86457d33b81\") " pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.287634 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh6sh\" (UniqueName: \"kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh\") pod \"auto-csr-approver-29587816-dlnrp\" (UID: \"21cbe3ab-4b9a-49b1-90c6-a86457d33b81\") " 
pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.461330 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:00 crc kubenswrapper[4681]: I0404 02:16:00.869325 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587816-dlnrp"] Apr 04 02:16:01 crc kubenswrapper[4681]: I0404 02:16:01.380654 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" event={"ID":"21cbe3ab-4b9a-49b1-90c6-a86457d33b81","Type":"ContainerStarted","Data":"6ecfaf1ed686d79674ae91e6d9b0fce737746b56ca56d63f1199a43004e0f89b"} Apr 04 02:16:02 crc kubenswrapper[4681]: I0404 02:16:02.388797 4681 generic.go:334] "Generic (PLEG): container finished" podID="21cbe3ab-4b9a-49b1-90c6-a86457d33b81" containerID="c457b473bb243d16a05cb24de21f923745e72352c272eee3ba000a0b53241a12" exitCode=0 Apr 04 02:16:02 crc kubenswrapper[4681]: I0404 02:16:02.388922 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" event={"ID":"21cbe3ab-4b9a-49b1-90c6-a86457d33b81","Type":"ContainerDied","Data":"c457b473bb243d16a05cb24de21f923745e72352c272eee3ba000a0b53241a12"} Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.512299 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.512830 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.549912 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.713522 4681 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.908622 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dh6sh\" (UniqueName: \"kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh\") pod \"21cbe3ab-4b9a-49b1-90c6-a86457d33b81\" (UID: \"21cbe3ab-4b9a-49b1-90c6-a86457d33b81\") " Apr 04 02:16:03 crc kubenswrapper[4681]: I0404 02:16:03.916789 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh" (OuterVolumeSpecName: "kube-api-access-dh6sh") pod "21cbe3ab-4b9a-49b1-90c6-a86457d33b81" (UID: "21cbe3ab-4b9a-49b1-90c6-a86457d33b81"). InnerVolumeSpecName "kube-api-access-dh6sh". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.010596 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dh6sh\" (UniqueName: \"kubernetes.io/projected/21cbe3ab-4b9a-49b1-90c6-a86457d33b81-kube-api-access-dh6sh\") on node \"crc\" DevicePath \"\"" Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.411816 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.411827 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587816-dlnrp" event={"ID":"21cbe3ab-4b9a-49b1-90c6-a86457d33b81","Type":"ContainerDied","Data":"6ecfaf1ed686d79674ae91e6d9b0fce737746b56ca56d63f1199a43004e0f89b"} Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.411874 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ecfaf1ed686d79674ae91e6d9b0fce737746b56ca56d63f1199a43004e0f89b" Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.444690 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-nc9vv" Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.786741 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587810-4s29h"] Apr 04 02:16:04 crc kubenswrapper[4681]: I0404 02:16:04.797346 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587810-4s29h"] Apr 04 02:16:05 crc kubenswrapper[4681]: I0404 02:16:05.060893 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-kj4kn" Apr 04 02:16:05 crc kubenswrapper[4681]: I0404 02:16:05.212843 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bb9cb84-0f19-49e1-8c8a-f8394d24935b" path="/var/lib/kubelet/pods/6bb9cb84-0f19-49e1-8c8a-f8394d24935b/volumes" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.164107 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx"] Apr 04 02:16:11 crc kubenswrapper[4681]: E0404 02:16:11.164620 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21cbe3ab-4b9a-49b1-90c6-a86457d33b81" containerName="oc" Apr 04 02:16:11 
crc kubenswrapper[4681]: I0404 02:16:11.164631 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="21cbe3ab-4b9a-49b1-90c6-a86457d33b81" containerName="oc" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.164748 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="21cbe3ab-4b9a-49b1-90c6-a86457d33b81" containerName="oc" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.165602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.170369 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-xf67k" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.176068 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx"] Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.320093 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kswkl\" (UniqueName: \"kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.320344 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 
02:16:11.320500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.421496 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.421544 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.421593 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kswkl\" (UniqueName: \"kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.422335 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.422948 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.441621 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kswkl\" (UniqueName: \"kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl\") pod \"f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.483382 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:11 crc kubenswrapper[4681]: I0404 02:16:11.885497 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx"] Apr 04 02:16:11 crc kubenswrapper[4681]: W0404 02:16:11.891542 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622ff9fc_9cc4_4167_86f8_012c6031b393.slice/crio-036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33 WatchSource:0}: Error finding container 036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33: Status 404 returned error can't find the container with id 036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33 Apr 04 02:16:12 crc kubenswrapper[4681]: I0404 02:16:12.477010 4681 generic.go:334] "Generic (PLEG): container finished" podID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerID="7afaa378a5da4275cd81b3a02e5d4b3b66eff76ab5ca1412b3372d6e1e3270d7" exitCode=0 Apr 04 02:16:12 crc kubenswrapper[4681]: I0404 02:16:12.477082 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerDied","Data":"7afaa378a5da4275cd81b3a02e5d4b3b66eff76ab5ca1412b3372d6e1e3270d7"} Apr 04 02:16:12 crc kubenswrapper[4681]: I0404 02:16:12.477387 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerStarted","Data":"036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33"} Apr 04 02:16:14 crc kubenswrapper[4681]: I0404 02:16:14.499986 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerStarted","Data":"39f3c3548a11e7d6ab691bc478ce4c76eb238ffe743ba3934311c671f41efa7d"} Apr 04 02:16:15 crc kubenswrapper[4681]: I0404 02:16:15.511516 4681 generic.go:334] "Generic (PLEG): container finished" podID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerID="39f3c3548a11e7d6ab691bc478ce4c76eb238ffe743ba3934311c671f41efa7d" exitCode=0 Apr 04 02:16:15 crc kubenswrapper[4681]: I0404 02:16:15.511569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerDied","Data":"39f3c3548a11e7d6ab691bc478ce4c76eb238ffe743ba3934311c671f41efa7d"} Apr 04 02:16:16 crc kubenswrapper[4681]: I0404 02:16:16.523220 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerStarted","Data":"675e7cb46869a38845edd72e5f96fe29ba4616d65bc79685df2f8a4258c53b74"} Apr 04 02:16:16 crc kubenswrapper[4681]: E0404 02:16:16.624136 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622ff9fc_9cc4_4167_86f8_012c6031b393.slice/crio-conmon-675e7cb46869a38845edd72e5f96fe29ba4616d65bc79685df2f8a4258c53b74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod622ff9fc_9cc4_4167_86f8_012c6031b393.slice/crio-675e7cb46869a38845edd72e5f96fe29ba4616d65bc79685df2f8a4258c53b74.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:16:17 crc kubenswrapper[4681]: I0404 02:16:17.538092 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerID="675e7cb46869a38845edd72e5f96fe29ba4616d65bc79685df2f8a4258c53b74" exitCode=0 Apr 04 02:16:17 crc kubenswrapper[4681]: I0404 02:16:17.538552 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerDied","Data":"675e7cb46869a38845edd72e5f96fe29ba4616d65bc79685df2f8a4258c53b74"} Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.806431 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.937537 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle\") pod \"622ff9fc-9cc4-4167-86f8-012c6031b393\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.937633 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kswkl\" (UniqueName: \"kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl\") pod \"622ff9fc-9cc4-4167-86f8-012c6031b393\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.937680 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util\") pod \"622ff9fc-9cc4-4167-86f8-012c6031b393\" (UID: \"622ff9fc-9cc4-4167-86f8-012c6031b393\") " Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.938809 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle" (OuterVolumeSpecName: 
"bundle") pod "622ff9fc-9cc4-4167-86f8-012c6031b393" (UID: "622ff9fc-9cc4-4167-86f8-012c6031b393"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.943154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl" (OuterVolumeSpecName: "kube-api-access-kswkl") pod "622ff9fc-9cc4-4167-86f8-012c6031b393" (UID: "622ff9fc-9cc4-4167-86f8-012c6031b393"). InnerVolumeSpecName "kube-api-access-kswkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:16:18 crc kubenswrapper[4681]: I0404 02:16:18.947935 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util" (OuterVolumeSpecName: "util") pod "622ff9fc-9cc4-4167-86f8-012c6031b393" (UID: "622ff9fc-9cc4-4167-86f8-012c6031b393"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.039337 4681 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.039418 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kswkl\" (UniqueName: \"kubernetes.io/projected/622ff9fc-9cc4-4167-86f8-012c6031b393-kube-api-access-kswkl\") on node \"crc\" DevicePath \"\"" Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.039444 4681 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/622ff9fc-9cc4-4167-86f8-012c6031b393-util\") on node \"crc\" DevicePath \"\"" Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.561444 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" event={"ID":"622ff9fc-9cc4-4167-86f8-012c6031b393","Type":"ContainerDied","Data":"036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33"} Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.561498 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="036af7660e198d62715076ed944f4ff002836a1cffba15a6381f9c92fbe7af33" Apr 04 02:16:19 crc kubenswrapper[4681]: I0404 02:16:19.561540 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.466239 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr"] Apr 04 02:16:23 crc kubenswrapper[4681]: E0404 02:16:23.467026 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="extract" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.467039 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="extract" Apr 04 02:16:23 crc kubenswrapper[4681]: E0404 02:16:23.467059 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="util" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.467070 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="util" Apr 04 02:16:23 crc kubenswrapper[4681]: E0404 02:16:23.467080 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="pull" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.467086 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="pull" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.467203 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="622ff9fc-9cc4-4167-86f8-012c6031b393" containerName="extract" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.467739 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.476767 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zccgx" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.509974 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr"] Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.614489 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb67\" (UniqueName: \"kubernetes.io/projected/08d96c31-f9c1-4308-ba34-bb5135a86eb8-kube-api-access-mnb67\") pod \"openstack-operator-controller-init-5645c5b4f-jmkvr\" (UID: \"08d96c31-f9c1-4308-ba34-bb5135a86eb8\") " pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.715731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnb67\" (UniqueName: \"kubernetes.io/projected/08d96c31-f9c1-4308-ba34-bb5135a86eb8-kube-api-access-mnb67\") pod \"openstack-operator-controller-init-5645c5b4f-jmkvr\" (UID: \"08d96c31-f9c1-4308-ba34-bb5135a86eb8\") " pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.737064 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnb67\" (UniqueName: \"kubernetes.io/projected/08d96c31-f9c1-4308-ba34-bb5135a86eb8-kube-api-access-mnb67\") pod \"openstack-operator-controller-init-5645c5b4f-jmkvr\" (UID: \"08d96c31-f9c1-4308-ba34-bb5135a86eb8\") " pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:23 crc kubenswrapper[4681]: I0404 02:16:23.788314 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:24 crc kubenswrapper[4681]: I0404 02:16:24.043002 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr"] Apr 04 02:16:24 crc kubenswrapper[4681]: W0404 02:16:24.049176 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08d96c31_f9c1_4308_ba34_bb5135a86eb8.slice/crio-d36985532f769325b96a510cee95b5dd3faad21913dbdefb7dae8f5c9b379194 WatchSource:0}: Error finding container d36985532f769325b96a510cee95b5dd3faad21913dbdefb7dae8f5c9b379194: Status 404 returned error can't find the container with id d36985532f769325b96a510cee95b5dd3faad21913dbdefb7dae8f5c9b379194 Apr 04 02:16:24 crc kubenswrapper[4681]: I0404 02:16:24.612378 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" event={"ID":"08d96c31-f9c1-4308-ba34-bb5135a86eb8","Type":"ContainerStarted","Data":"d36985532f769325b96a510cee95b5dd3faad21913dbdefb7dae8f5c9b379194"} Apr 04 02:16:28 crc kubenswrapper[4681]: I0404 02:16:28.639659 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" event={"ID":"08d96c31-f9c1-4308-ba34-bb5135a86eb8","Type":"ContainerStarted","Data":"18a011f7eacd4d91e8bd45ecd54f9b6e426703443f971c35da7c2453cf9cf50d"} Apr 04 02:16:28 crc kubenswrapper[4681]: I0404 02:16:28.640171 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:33 crc kubenswrapper[4681]: I0404 02:16:33.791785 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" Apr 04 02:16:33 crc kubenswrapper[4681]: 
I0404 02:16:33.824799 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5645c5b4f-jmkvr" podStartSLOduration=7.181472647 podStartE2EDuration="10.824785333s" podCreationTimestamp="2026-04-04 02:16:23 +0000 UTC" firstStartedPulling="2026-04-04 02:16:24.054927207 +0000 UTC m=+1263.720702327" lastFinishedPulling="2026-04-04 02:16:27.698239893 +0000 UTC m=+1267.364015013" observedRunningTime="2026-04-04 02:16:28.673089005 +0000 UTC m=+1268.338864125" watchObservedRunningTime="2026-04-04 02:16:33.824785333 +0000 UTC m=+1273.490560453" Apr 04 02:16:56 crc kubenswrapper[4681]: I0404 02:16:56.524427 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:16:56 crc kubenswrapper[4681]: I0404 02:16:56.525544 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:17:03 crc kubenswrapper[4681]: I0404 02:17:03.147646 4681 scope.go:117] "RemoveContainer" containerID="b0f838f6731e25e64a6c8e686d39f8b7f91432cf81314f15a4aae78d06a103a2" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.365219 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.367713 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.369928 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-566bl" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.374077 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.375101 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.377583 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-n6wzg" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.382642 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.396115 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.397166 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.399918 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5snq9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.419574 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.420616 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.424884 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8tb8c" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.431028 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.441211 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.449015 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.473329 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-8684f86954-4z752"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.474191 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.475537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvsb5\" (UniqueName: \"kubernetes.io/projected/23b37abe-289b-45e9-b55b-e2985e411401-kube-api-access-kvsb5\") pod \"cinder-operator-controller-manager-5d46cccfb9-ttwtp\" (UID: \"23b37abe-289b-45e9-b55b-e2985e411401\") " pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.475588 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5st\" (UniqueName: \"kubernetes.io/projected/6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99-kube-api-access-6b5st\") pod \"designate-operator-controller-manager-58689c6fff-rnnzd\" (UID: \"6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99\") " pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.475631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v675\" (UniqueName: \"kubernetes.io/projected/895bcf63-b464-4408-a0f2-8217d1a6179b-kube-api-access-8v675\") pod \"barbican-operator-controller-manager-86644c9c9c-kvnd9\" (UID: \"895bcf63-b464-4408-a0f2-8217d1a6179b\") " pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.476995 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-n6wvp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.486441 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8684f86954-4z752"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 
02:17:11.506486 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.507375 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.520513 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-jr96q" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.534231 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.535082 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.537925 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-grgpv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.538116 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.550888 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.561916 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.577347 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f"] Apr 04 02:17:11 crc 
kubenswrapper[4681]: I0404 02:17:11.578272 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69jfk\" (UniqueName: \"kubernetes.io/projected/06717285-4d9d-4b9d-919e-106dd0ec0274-kube-api-access-69jfk\") pod \"heat-operator-controller-manager-8684f86954-4z752\" (UID: \"06717285-4d9d-4b9d-919e-106dd0ec0274\") " pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578346 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvsb5\" (UniqueName: \"kubernetes.io/projected/23b37abe-289b-45e9-b55b-e2985e411401-kube-api-access-kvsb5\") pod \"cinder-operator-controller-manager-5d46cccfb9-ttwtp\" (UID: \"23b37abe-289b-45e9-b55b-e2985e411401\") " pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578369 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8l4r\" (UniqueName: \"kubernetes.io/projected/be876d09-d6fd-46f7-a03c-8c13f72bee75-kube-api-access-h8l4r\") pod \"horizon-operator-controller-manager-6ccfd84cb4-sq9cm\" (UID: \"be876d09-d6fd-46f7-a03c-8c13f72bee75\") " pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5st\" (UniqueName: \"kubernetes.io/projected/6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99-kube-api-access-6b5st\") pod \"designate-operator-controller-manager-58689c6fff-rnnzd\" (UID: \"6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99\") " 
pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578419 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44sr7\" (UniqueName: \"kubernetes.io/projected/4513182b-1bdb-40a2-ba02-2e8aa8567819-kube-api-access-44sr7\") pod \"glance-operator-controller-manager-648bdc7f99-skr68\" (UID: \"4513182b-1bdb-40a2-ba02-2e8aa8567819\") " pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.578447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v675\" (UniqueName: \"kubernetes.io/projected/895bcf63-b464-4408-a0f2-8217d1a6179b-kube-api-access-8v675\") pod \"barbican-operator-controller-manager-86644c9c9c-kvnd9\" (UID: \"895bcf63-b464-4408-a0f2-8217d1a6179b\") " pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.583899 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-jw4gk" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.603346 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.607582 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvsb5\" (UniqueName: \"kubernetes.io/projected/23b37abe-289b-45e9-b55b-e2985e411401-kube-api-access-kvsb5\") pod \"cinder-operator-controller-manager-5d46cccfb9-ttwtp\" (UID: \"23b37abe-289b-45e9-b55b-e2985e411401\") " pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.617316 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.618129 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.618966 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5st\" (UniqueName: \"kubernetes.io/projected/6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99-kube-api-access-6b5st\") pod \"designate-operator-controller-manager-58689c6fff-rnnzd\" (UID: \"6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99\") " pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.623323 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.624376 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.635294 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hhml6" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.635562 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6bshg" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.636397 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v675\" (UniqueName: \"kubernetes.io/projected/895bcf63-b464-4408-a0f2-8217d1a6179b-kube-api-access-8v675\") pod \"barbican-operator-controller-manager-86644c9c9c-kvnd9\" (UID: \"895bcf63-b464-4408-a0f2-8217d1a6179b\") " pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.636475 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.644622 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.670334 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.671178 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.679984 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69jfk\" (UniqueName: \"kubernetes.io/projected/06717285-4d9d-4b9d-919e-106dd0ec0274-kube-api-access-69jfk\") pod \"heat-operator-controller-manager-8684f86954-4z752\" (UID: \"06717285-4d9d-4b9d-919e-106dd0ec0274\") " pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.680059 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8l4r\" (UniqueName: \"kubernetes.io/projected/be876d09-d6fd-46f7-a03c-8c13f72bee75-kube-api-access-h8l4r\") pod \"horizon-operator-controller-manager-6ccfd84cb4-sq9cm\" (UID: \"be876d09-d6fd-46f7-a03c-8c13f72bee75\") " pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.680094 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bf4b\" (UniqueName: \"kubernetes.io/projected/4536a628-89aa-4f79-b180-9199d3cf390a-kube-api-access-8bf4b\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.680135 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44sr7\" (UniqueName: \"kubernetes.io/projected/4513182b-1bdb-40a2-ba02-2e8aa8567819-kube-api-access-44sr7\") pod \"glance-operator-controller-manager-648bdc7f99-skr68\" (UID: \"4513182b-1bdb-40a2-ba02-2e8aa8567819\") " pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" Apr 04 02:17:11 crc kubenswrapper[4681]: 
I0404 02:17:11.680176 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.680244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5krgp\" (UniqueName: \"kubernetes.io/projected/1a4403a6-7904-4764-aba4-02a2bcc4bc19-kube-api-access-5krgp\") pod \"ironic-operator-controller-manager-5f96574b5-82k6f\" (UID: \"1a4403a6-7904-4764-aba4-02a2bcc4bc19\") " pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.681623 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-c8hzb" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.695545 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.696136 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.707196 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-74tl4"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.708315 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.718481 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.721855 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-7nr6v" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.722599 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.723626 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.724237 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.739040 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44sr7\" (UniqueName: \"kubernetes.io/projected/4513182b-1bdb-40a2-ba02-2e8aa8567819-kube-api-access-44sr7\") pod \"glance-operator-controller-manager-648bdc7f99-skr68\" (UID: \"4513182b-1bdb-40a2-ba02-2e8aa8567819\") " pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.748171 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.753895 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5wdzv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.754090 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8l4r\" (UniqueName: \"kubernetes.io/projected/be876d09-d6fd-46f7-a03c-8c13f72bee75-kube-api-access-h8l4r\") pod \"horizon-operator-controller-manager-6ccfd84cb4-sq9cm\" (UID: \"be876d09-d6fd-46f7-a03c-8c13f72bee75\") " pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.763184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69jfk\" (UniqueName: \"kubernetes.io/projected/06717285-4d9d-4b9d-919e-106dd0ec0274-kube-api-access-69jfk\") pod \"heat-operator-controller-manager-8684f86954-4z752\" (UID: \"06717285-4d9d-4b9d-919e-106dd0ec0274\") " pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808579 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808648 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr5sz\" (UniqueName: \"kubernetes.io/projected/3ac3008b-06b0-4ab7-a59f-3e7682627410-kube-api-access-pr5sz\") pod \"nova-operator-controller-manager-5d6f9fd68c-x7x9p\" (UID: 
\"3ac3008b-06b0-4ab7-a59f-3e7682627410\") " pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808705 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc628\" (UniqueName: \"kubernetes.io/projected/28828ebb-13dc-4ba1-98e1-39c6f38e9245-kube-api-access-zc628\") pod \"mariadb-operator-controller-manager-6554749d88-tj6wj\" (UID: \"28828ebb-13dc-4ba1-98e1-39c6f38e9245\") " pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808757 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5krgp\" (UniqueName: \"kubernetes.io/projected/1a4403a6-7904-4764-aba4-02a2bcc4bc19-kube-api-access-5krgp\") pod \"ironic-operator-controller-manager-5f96574b5-82k6f\" (UID: \"1a4403a6-7904-4764-aba4-02a2bcc4bc19\") " pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808802 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zq4x\" (UniqueName: \"kubernetes.io/projected/80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf-kube-api-access-7zq4x\") pod \"keystone-operator-controller-manager-dbf8bb784-4gx6m\" (UID: \"80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf\") " pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808853 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bf4b\" (UniqueName: \"kubernetes.io/projected/4536a628-89aa-4f79-b180-9199d3cf390a-kube-api-access-8bf4b\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 
04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808913 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d25v6\" (UniqueName: \"kubernetes.io/projected/82ce5791-77cb-418c-b3d2-7f49f625ccf1-kube-api-access-d25v6\") pod \"manila-operator-controller-manager-6b7497dc59-tllnk\" (UID: \"82ce5791-77cb-418c-b3d2-7f49f625ccf1\") " pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.808946 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xvr\" (UniqueName: \"kubernetes.io/projected/856d74a1-4df8-446a-a82b-3dcc76f1af70-kube-api-access-29xvr\") pod \"neutron-operator-controller-manager-767865f676-74tl4\" (UID: \"856d74a1-4df8-446a-a82b-3dcc76f1af70\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:11 crc kubenswrapper[4681]: E0404 02:17:11.809137 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:11 crc kubenswrapper[4681]: E0404 02:17:11.809203 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert podName:4536a628-89aa-4f79-b180-9199d3cf390a nodeName:}" failed. No retries permitted until 2026-04-04 02:17:12.309176736 +0000 UTC m=+1311.974951856 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-gcbfv" (UID: "4536a628-89aa-4f79-b180-9199d3cf390a") : secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.812528 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.813067 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.832896 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bf4b\" (UniqueName: \"kubernetes.io/projected/4536a628-89aa-4f79-b180-9199d3cf390a-kube-api-access-8bf4b\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.839172 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5krgp\" (UniqueName: \"kubernetes.io/projected/1a4403a6-7904-4764-aba4-02a2bcc4bc19-kube-api-access-5krgp\") pod \"ironic-operator-controller-manager-5f96574b5-82k6f\" (UID: \"1a4403a6-7904-4764-aba4-02a2bcc4bc19\") " pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.852369 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-74tl4"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.853117 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.868783 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.869819 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.874743 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-7kdf6" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.909591 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xvr\" (UniqueName: \"kubernetes.io/projected/856d74a1-4df8-446a-a82b-3dcc76f1af70-kube-api-access-29xvr\") pod \"neutron-operator-controller-manager-767865f676-74tl4\" (UID: \"856d74a1-4df8-446a-a82b-3dcc76f1af70\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.909646 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr5sz\" (UniqueName: \"kubernetes.io/projected/3ac3008b-06b0-4ab7-a59f-3e7682627410-kube-api-access-pr5sz\") pod \"nova-operator-controller-manager-5d6f9fd68c-x7x9p\" (UID: \"3ac3008b-06b0-4ab7-a59f-3e7682627410\") " pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.909682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc628\" (UniqueName: \"kubernetes.io/projected/28828ebb-13dc-4ba1-98e1-39c6f38e9245-kube-api-access-zc628\") pod \"mariadb-operator-controller-manager-6554749d88-tj6wj\" (UID: 
\"28828ebb-13dc-4ba1-98e1-39c6f38e9245\") " pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.909718 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zq4x\" (UniqueName: \"kubernetes.io/projected/80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf-kube-api-access-7zq4x\") pod \"keystone-operator-controller-manager-dbf8bb784-4gx6m\" (UID: \"80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf\") " pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.909763 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d25v6\" (UniqueName: \"kubernetes.io/projected/82ce5791-77cb-418c-b3d2-7f49f625ccf1-kube-api-access-d25v6\") pod \"manila-operator-controller-manager-6b7497dc59-tllnk\" (UID: \"82ce5791-77cb-418c-b3d2-7f49f625ccf1\") " pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.918525 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.931059 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xvr\" (UniqueName: \"kubernetes.io/projected/856d74a1-4df8-446a-a82b-3dcc76f1af70-kube-api-access-29xvr\") pod \"neutron-operator-controller-manager-767865f676-74tl4\" (UID: \"856d74a1-4df8-446a-a82b-3dcc76f1af70\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.934061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc628\" (UniqueName: \"kubernetes.io/projected/28828ebb-13dc-4ba1-98e1-39c6f38e9245-kube-api-access-zc628\") pod 
\"mariadb-operator-controller-manager-6554749d88-tj6wj\" (UID: \"28828ebb-13dc-4ba1-98e1-39c6f38e9245\") " pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.946447 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr5sz\" (UniqueName: \"kubernetes.io/projected/3ac3008b-06b0-4ab7-a59f-3e7682627410-kube-api-access-pr5sz\") pod \"nova-operator-controller-manager-5d6f9fd68c-x7x9p\" (UID: \"3ac3008b-06b0-4ab7-a59f-3e7682627410\") " pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.956724 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d25v6\" (UniqueName: \"kubernetes.io/projected/82ce5791-77cb-418c-b3d2-7f49f625ccf1-kube-api-access-d25v6\") pod \"manila-operator-controller-manager-6b7497dc59-tllnk\" (UID: \"82ce5791-77cb-418c-b3d2-7f49f625ccf1\") " pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.971010 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zq4x\" (UniqueName: \"kubernetes.io/projected/80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf-kube-api-access-7zq4x\") pod \"keystone-operator-controller-manager-dbf8bb784-4gx6m\" (UID: \"80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf\") " pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.975640 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.977963 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.980584 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.980939 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.981425 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cf9g2" Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.991127 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj"] Apr 04 02:17:11 crc kubenswrapper[4681]: I0404 02:17:11.992017 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:11.996274 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kk7f8" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.012476 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.013410 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4gvk\" (UniqueName: \"kubernetes.io/projected/d8331de2-1469-4856-a56c-f1e107779ca4-kube-api-access-h4gvk\") pod \"octavia-operator-controller-manager-7594f57946-c9j8w\" (UID: \"d8331de2-1469-4856-a56c-f1e107779ca4\") " pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.013583 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.019558 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xsw6h" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.020634 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.021744 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.029438 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.030445 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.036753 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-wvtbs" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.037846 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.039258 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-92phq" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.052526 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.054555 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.077509 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.089312 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.090128 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.094112 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4d4f7" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.100562 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114459 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clpjv\" (UniqueName: \"kubernetes.io/projected/b54a4f45-de00-4dd5-95d4-f96a21d34189-kube-api-access-clpjv\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114503 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x8nz\" (UniqueName: \"kubernetes.io/projected/8e44912b-0956-49e8-ad3e-140b3d60838e-kube-api-access-8x8nz\") pod \"telemetry-operator-controller-manager-d6f76d4c7-2vrfg\" (UID: 
\"8e44912b-0956-49e8-ad3e-140b3d60838e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114525 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4xbf\" (UniqueName: \"kubernetes.io/projected/eb76f1dc-bae9-491f-a58e-3cc1f9c15571-kube-api-access-m4xbf\") pod \"swift-operator-controller-manager-fbdcf7f7b-844tj\" (UID: \"eb76f1dc-bae9-491f-a58e-3cc1f9c15571\") " pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114656 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2g6f\" (UniqueName: \"kubernetes.io/projected/b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab-kube-api-access-s2g6f\") pod \"ovn-operator-controller-manager-84464c7c78-brc8n\" (UID: \"b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab\") " pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114711 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9nc\" (UniqueName: \"kubernetes.io/projected/6479782a-b4ab-4e90-a9bd-29ef0a41f9d7-kube-api-access-sj9nc\") pod \"placement-operator-controller-manager-559d8fdb6b-tmg65\" (UID: \"6479782a-b4ab-4e90-a9bd-29ef0a41f9d7\") " pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114867 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.114923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4gvk\" (UniqueName: \"kubernetes.io/projected/d8331de2-1469-4856-a56c-f1e107779ca4-kube-api-access-h4gvk\") pod \"octavia-operator-controller-manager-7594f57946-c9j8w\" (UID: \"d8331de2-1469-4856-a56c-f1e107779ca4\") " pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.132937 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.136497 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.152347 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.156135 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4gvk\" (UniqueName: \"kubernetes.io/projected/d8331de2-1469-4856-a56c-f1e107779ca4-kube-api-access-h4gvk\") pod \"octavia-operator-controller-manager-7594f57946-c9j8w\" (UID: \"d8331de2-1469-4856-a56c-f1e107779ca4\") " pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.179971 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.180251 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.204826 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.204995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.206736 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.213194 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-2pql8" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216284 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216411 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clpjv\" (UniqueName: \"kubernetes.io/projected/b54a4f45-de00-4dd5-95d4-f96a21d34189-kube-api-access-clpjv\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216457 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrc9g\" (UniqueName: \"kubernetes.io/projected/da0d70da-b61c-41ee-938b-f4a931300f75-kube-api-access-lrc9g\") pod \"test-operator-controller-manager-56ccc97cf5-j87hk\" (UID: \"da0d70da-b61c-41ee-938b-f4a931300f75\") " pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.216545 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.216613 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert podName:b54a4f45-de00-4dd5-95d4-f96a21d34189 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:12.716593206 +0000 UTC m=+1312.382368326 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert") pod "openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" (UID: "b54a4f45-de00-4dd5-95d4-f96a21d34189") : secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216486 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x8nz\" (UniqueName: \"kubernetes.io/projected/8e44912b-0956-49e8-ad3e-140b3d60838e-kube-api-access-8x8nz\") pod \"telemetry-operator-controller-manager-d6f76d4c7-2vrfg\" (UID: \"8e44912b-0956-49e8-ad3e-140b3d60838e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216852 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4xbf\" (UniqueName: 
\"kubernetes.io/projected/eb76f1dc-bae9-491f-a58e-3cc1f9c15571-kube-api-access-m4xbf\") pod \"swift-operator-controller-manager-fbdcf7f7b-844tj\" (UID: \"eb76f1dc-bae9-491f-a58e-3cc1f9c15571\") " pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216887 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2g6f\" (UniqueName: \"kubernetes.io/projected/b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab-kube-api-access-s2g6f\") pod \"ovn-operator-controller-manager-84464c7c78-brc8n\" (UID: \"b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab\") " pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.216911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj9nc\" (UniqueName: \"kubernetes.io/projected/6479782a-b4ab-4e90-a9bd-29ef0a41f9d7-kube-api-access-sj9nc\") pod \"placement-operator-controller-manager-559d8fdb6b-tmg65\" (UID: \"6479782a-b4ab-4e90-a9bd-29ef0a41f9d7\") " pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.218486 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.219852 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.251402 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2g6f\" (UniqueName: \"kubernetes.io/projected/b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab-kube-api-access-s2g6f\") pod \"ovn-operator-controller-manager-84464c7c78-brc8n\" (UID: \"b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab\") " pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.253163 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4xbf\" (UniqueName: \"kubernetes.io/projected/eb76f1dc-bae9-491f-a58e-3cc1f9c15571-kube-api-access-m4xbf\") pod \"swift-operator-controller-manager-fbdcf7f7b-844tj\" (UID: \"eb76f1dc-bae9-491f-a58e-3cc1f9c15571\") " pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.255252 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x8nz\" (UniqueName: \"kubernetes.io/projected/8e44912b-0956-49e8-ad3e-140b3d60838e-kube-api-access-8x8nz\") pod \"telemetry-operator-controller-manager-d6f76d4c7-2vrfg\" (UID: \"8e44912b-0956-49e8-ad3e-140b3d60838e\") " pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.260149 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clpjv\" (UniqueName: \"kubernetes.io/projected/b54a4f45-de00-4dd5-95d4-f96a21d34189-kube-api-access-clpjv\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.261625 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj9nc\" (UniqueName: \"kubernetes.io/projected/6479782a-b4ab-4e90-a9bd-29ef0a41f9d7-kube-api-access-sj9nc\") pod \"placement-operator-controller-manager-559d8fdb6b-tmg65\" (UID: \"6479782a-b4ab-4e90-a9bd-29ef0a41f9d7\") " pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.318595 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrc9g\" (UniqueName: \"kubernetes.io/projected/da0d70da-b61c-41ee-938b-f4a931300f75-kube-api-access-lrc9g\") pod \"test-operator-controller-manager-56ccc97cf5-j87hk\" (UID: \"da0d70da-b61c-41ee-938b-f4a931300f75\") " pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.318684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.318715 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pldqg\" (UniqueName: \"kubernetes.io/projected/a2899081-691e-4ad2-8e98-4fb8b955a0cd-kube-api-access-pldqg\") pod \"watcher-operator-controller-manager-58b78987f4-nwsmd\" (UID: \"a2899081-691e-4ad2-8e98-4fb8b955a0cd\") " pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.319707 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: 
E0404 02:17:12.319743 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert podName:4536a628-89aa-4f79-b180-9199d3cf390a nodeName:}" failed. No retries permitted until 2026-04-04 02:17:13.319731468 +0000 UTC m=+1312.985506578 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-gcbfv" (UID: "4536a628-89aa-4f79-b180-9199d3cf390a") : secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.322752 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.325205 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.326541 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.331386 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.331603 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-m6zvn" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.332038 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.339661 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrc9g\" (UniqueName: \"kubernetes.io/projected/da0d70da-b61c-41ee-938b-f4a931300f75-kube-api-access-lrc9g\") pod \"test-operator-controller-manager-56ccc97cf5-j87hk\" (UID: \"da0d70da-b61c-41ee-938b-f4a931300f75\") " pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.358742 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.392903 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.395367 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.407317 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.419597 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.419832 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdh4t\" (UniqueName: \"kubernetes.io/projected/df89fca6-3fb4-4d85-95df-4b48e4a1e884-kube-api-access-cdh4t\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.419979 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pldqg\" (UniqueName: \"kubernetes.io/projected/a2899081-691e-4ad2-8e98-4fb8b955a0cd-kube-api-access-pldqg\") pod \"watcher-operator-controller-manager-58b78987f4-nwsmd\" (UID: \"a2899081-691e-4ad2-8e98-4fb8b955a0cd\") " pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.420097 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc 
kubenswrapper[4681]: I0404 02:17:12.420686 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.423086 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.426809 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-k46kc" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.438713 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.444583 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pldqg\" (UniqueName: \"kubernetes.io/projected/a2899081-691e-4ad2-8e98-4fb8b955a0cd-kube-api-access-pldqg\") pod \"watcher-operator-controller-manager-58b78987f4-nwsmd\" (UID: \"a2899081-691e-4ad2-8e98-4fb8b955a0cd\") " pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.445518 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.489002 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.521172 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.521228 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.521333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdh4t\" (UniqueName: \"kubernetes.io/projected/df89fca6-3fb4-4d85-95df-4b48e4a1e884-kube-api-access-cdh4t\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.521361 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jg7\" (UniqueName: \"kubernetes.io/projected/2acd20f5-b31c-411a-989c-f0ad12628894-kube-api-access-l2jg7\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-zv8zs\" (UID: \"2acd20f5-b31c-411a-989c-f0ad12628894\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.521386 4681 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.521459 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:13.021438678 +0000 UTC m=+1312.687213798 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "metrics-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.521552 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.521597 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:13.021581552 +0000 UTC m=+1312.687356662 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.542604 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdh4t\" (UniqueName: \"kubernetes.io/projected/df89fca6-3fb4-4d85-95df-4b48e4a1e884-kube-api-access-cdh4t\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.551546 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.624308 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2jg7\" (UniqueName: \"kubernetes.io/projected/2acd20f5-b31c-411a-989c-f0ad12628894-kube-api-access-l2jg7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zv8zs\" (UID: \"2acd20f5-b31c-411a-989c-f0ad12628894\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.655078 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-8684f86954-4z752"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.655173 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2jg7\" (UniqueName: \"kubernetes.io/projected/2acd20f5-b31c-411a-989c-f0ad12628894-kube-api-access-l2jg7\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zv8zs\" (UID: \"2acd20f5-b31c-411a-989c-f0ad12628894\") " 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.669061 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.687600 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp"] Apr 04 02:17:12 crc kubenswrapper[4681]: W0404 02:17:12.714984 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4513182b_1bdb_40a2_ba02_2e8aa8567819.slice/crio-2ee5be6e2da7d1a3519b831baa6953b09f21763d034d808145749cebde9d19d3 WatchSource:0}: Error finding container 2ee5be6e2da7d1a3519b831baa6953b09f21763d034d808145749cebde9d19d3: Status 404 returned error can't find the container with id 2ee5be6e2da7d1a3519b831baa6953b09f21763d034d808145749cebde9d19d3 Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.725665 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.725787 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: E0404 02:17:12.725825 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert podName:b54a4f45-de00-4dd5-95d4-f96a21d34189 nodeName:}" failed. 
No retries permitted until 2026-04-04 02:17:13.72581237 +0000 UTC m=+1313.391587490 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert") pod "openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" (UID: "b54a4f45-de00-4dd5-95d4-f96a21d34189") : secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.745975 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.779291 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.788585 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.794236 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f"] Apr 04 02:17:12 crc kubenswrapper[4681]: I0404 02:17:12.934426 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.030714 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.030878 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.031020 4681 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.031078 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:14.031060633 +0000 UTC m=+1313.696835753 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "metrics-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.031521 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.031558 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:14.031549017 +0000 UTC m=+1313.697324137 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.032919 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" event={"ID":"06717285-4d9d-4b9d-919e-106dd0ec0274","Type":"ContainerStarted","Data":"c6e42afec9e977f278bd4719a01798e590d4ea819842d5939c4f276cd7c178b4"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.033835 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" event={"ID":"895bcf63-b464-4408-a0f2-8217d1a6179b","Type":"ContainerStarted","Data":"f793beaa29ae094f7db30cee5113dab52e5c228f7b938f4a65e04aa46022fd53"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.035587 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" event={"ID":"6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99","Type":"ContainerStarted","Data":"243b22dbdf1f7d6150317c85590e1e8ba13e925c5706e61c630f2e6b30d392ba"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.036983 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" event={"ID":"23b37abe-289b-45e9-b55b-e2985e411401","Type":"ContainerStarted","Data":"20593c159811bbb5e2d734ec779370c0667fcee5206a9d043137a54e95e33415"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.038053 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" 
event={"ID":"4513182b-1bdb-40a2-ba02-2e8aa8567819","Type":"ContainerStarted","Data":"2ee5be6e2da7d1a3519b831baa6953b09f21763d034d808145749cebde9d19d3"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.040296 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" event={"ID":"80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf","Type":"ContainerStarted","Data":"9e1b8286bb5dfd8908b4b969faaee1d2d0a3a6c872473a620a069ead390e06dd"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.041795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" event={"ID":"be876d09-d6fd-46f7-a03c-8c13f72bee75","Type":"ContainerStarted","Data":"4f5ec3b08a099df7ea1806a684bc17b74eb9cfe1e4728ac0f23299a71a51fc9f"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.042920 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" event={"ID":"1a4403a6-7904-4764-aba4-02a2bcc4bc19","Type":"ContainerStarted","Data":"dc99d98f2c1c624852d49f83bff96bdf89723b673670d0625d592809526a6ab3"} Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.168221 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.176140 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.186926 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.213303 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p"] Apr 04 
02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.298802 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.307680 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-74tl4"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.313274 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.343338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.343562 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.343655 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert podName:4536a628-89aa-4f79-b180-9199d3cf390a nodeName:}" failed. No retries permitted until 2026-04-04 02:17:15.343633258 +0000 UTC m=+1315.009408448 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-gcbfv" (UID: "4536a628-89aa-4f79-b180-9199d3cf390a") : secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.403639 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-29xvr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-74tl4_openstack-operators(856d74a1-4df8-446a-a82b-3dcc76f1af70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.413890 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" podUID="856d74a1-4df8-446a-a82b-3dcc76f1af70" Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.431723 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.442902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.450757 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n"] Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.453343 4681 kuberuntime_manager.go:1274] 
"Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2g6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-84464c7c78-brc8n_openstack-operators(b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.453436 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:2c1ef8575d74ef938c900e7ea7e622afeb589db6b4dcf30da544cc5689775296,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lrc9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56ccc97cf5-j87hk_openstack-operators(da0d70da-b61c-41ee-938b-f4a931300f75): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.454447 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podUID="b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab" Apr 04 02:17:13 crc 
kubenswrapper[4681]: W0404 02:17:13.455306 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2acd20f5_b31c_411a_989c_f0ad12628894.slice/crio-e47bcd2bf04225488291cba6c2b34effee4fbec1e6c0295d555f51276c235af5 WatchSource:0}: Error finding container e47bcd2bf04225488291cba6c2b34effee4fbec1e6c0295d555f51276c235af5: Status 404 returned error can't find the container with id e47bcd2bf04225488291cba6c2b34effee4fbec1e6c0295d555f51276c235af5 Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.455358 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" podUID="da0d70da-b61c-41ee-938b-f4a931300f75" Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.459150 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg"] Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.459655 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.110:5001/openstack-k8s-operators/watcher-operator:d9d7f10ace025d42d066d7d36b1c37277ef9b85e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pldqg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-58b78987f4-nwsmd_openstack-operators(a2899081-691e-4ad2-8e98-4fb8b955a0cd): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.460605 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2jg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zv8zs_openstack-operators(2acd20f5-b31c-411a-989c-f0ad12628894): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.460698 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x8nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d6f76d4c7-2vrfg_openstack-operators(8e44912b-0956-49e8-ad3e-140b3d60838e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.460749 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" podUID="a2899081-691e-4ad2-8e98-4fb8b955a0cd" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.462113 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.462118 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podUID="8e44912b-0956-49e8-ad3e-140b3d60838e" Apr 04 
02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.467618 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs"] Apr 04 02:17:13 crc kubenswrapper[4681]: I0404 02:17:13.748760 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.748962 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:13 crc kubenswrapper[4681]: E0404 02:17:13.749175 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert podName:b54a4f45-de00-4dd5-95d4-f96a21d34189 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:15.749155195 +0000 UTC m=+1315.414930305 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert") pod "openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" (UID: "b54a4f45-de00-4dd5-95d4-f96a21d34189") : secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.051687 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" event={"ID":"d8331de2-1469-4856-a56c-f1e107779ca4","Type":"ContainerStarted","Data":"4ae49daeb7511a36e61e370bd06c8d44c780220f6876977115798efec434be34"} Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.054003 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.054151 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.054213 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.054302 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" 
failed. No retries permitted until 2026-04-04 02:17:16.054280605 +0000 UTC m=+1315.720055815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.054302 4681 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.054371 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:16.054352397 +0000 UTC m=+1315.720127547 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "metrics-server-cert" not found Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.055281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" event={"ID":"2acd20f5-b31c-411a-989c-f0ad12628894","Type":"ContainerStarted","Data":"e47bcd2bf04225488291cba6c2b34effee4fbec1e6c0295d555f51276c235af5"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.056745 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.057845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" event={"ID":"da0d70da-b61c-41ee-938b-f4a931300f75","Type":"ContainerStarted","Data":"428c492284dd5a0a11201ae33f1668cfa9dd008b0dd0cd4cbdde938e81e48e4e"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.059132 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:2c1ef8575d74ef938c900e7ea7e622afeb589db6b4dcf30da544cc5689775296\\\"\"" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" podUID="da0d70da-b61c-41ee-938b-f4a931300f75" Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.059825 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" event={"ID":"6479782a-b4ab-4e90-a9bd-29ef0a41f9d7","Type":"ContainerStarted","Data":"2719eeb20605a4e4f04c9b6816f1d2dab9c364b7e23f547375da090666477942"} Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.061035 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" event={"ID":"28828ebb-13dc-4ba1-98e1-39c6f38e9245","Type":"ContainerStarted","Data":"d33eadb67c1ce3364f6977da72864155133a8788b696e175ff89bf28212cc744"} Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.063669 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" event={"ID":"3ac3008b-06b0-4ab7-a59f-3e7682627410","Type":"ContainerStarted","Data":"bf521efafa9a3223f603c72aef6f6879cbefde466a7362fb8601345b3debafb0"} Apr 04 02:17:14 
crc kubenswrapper[4681]: I0404 02:17:14.066542 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" event={"ID":"a2899081-691e-4ad2-8e98-4fb8b955a0cd","Type":"ContainerStarted","Data":"2620700025a802411828b7521020b6f02b8e11d922ce3c9e51bbab0ec2c8df60"} Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.069511 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" event={"ID":"eb76f1dc-bae9-491f-a58e-3cc1f9c15571","Type":"ContainerStarted","Data":"bed02313766345a3a7c6f6823659a9c411f8bf614cd1b9bec6a2f195c778671a"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.070041 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/watcher-operator:d9d7f10ace025d42d066d7d36b1c37277ef9b85e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" podUID="a2899081-691e-4ad2-8e98-4fb8b955a0cd" Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.072037 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" event={"ID":"8e44912b-0956-49e8-ad3e-140b3d60838e","Type":"ContainerStarted","Data":"7e15afae09e9b9ea3f194f6d51354d1fb13cd1f419d5281785f44d9a6458a81d"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.073781 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podUID="8e44912b-0956-49e8-ad3e-140b3d60838e" Apr 04 02:17:14 crc 
kubenswrapper[4681]: I0404 02:17:14.078498 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" event={"ID":"b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab","Type":"ContainerStarted","Data":"f5a56c254d9eceda3c36a4d8008d0d9ac6c32b22ab7a02ca143272e55f8a11d0"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.099653 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podUID="b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab" Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.105902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" event={"ID":"82ce5791-77cb-418c-b3d2-7f49f625ccf1","Type":"ContainerStarted","Data":"894f2af31b85c99119892df73429ebaa89565c2c39efb09bcdb7303b1a2987c4"} Apr 04 02:17:14 crc kubenswrapper[4681]: I0404 02:17:14.107719 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" event={"ID":"856d74a1-4df8-446a-a82b-3dcc76f1af70","Type":"ContainerStarted","Data":"44dbb9cfe3111c0fa32a0204d51571382020942a4bd209097f3fde459c89beff"} Apr 04 02:17:14 crc kubenswrapper[4681]: E0404 02:17:14.110147 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" podUID="856d74a1-4df8-446a-a82b-3dcc76f1af70" Apr 04 02:17:15 crc 
kubenswrapper[4681]: E0404 02:17:15.118247 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podUID="b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.118570 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.118628 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/openstack-k8s-operators/watcher-operator:d9d7f10ace025d42d066d7d36b1c37277ef9b85e\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" podUID="a2899081-691e-4ad2-8e98-4fb8b955a0cd" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.118720 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" podUID="856d74a1-4df8-446a-a82b-3dcc76f1af70" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.118770 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:2c1ef8575d74ef938c900e7ea7e622afeb589db6b4dcf30da544cc5689775296\\\"\"" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" podUID="da0d70da-b61c-41ee-938b-f4a931300f75" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.118825 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podUID="8e44912b-0956-49e8-ad3e-140b3d60838e" Apr 04 02:17:15 crc kubenswrapper[4681]: I0404 02:17:15.384002 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.384289 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.384379 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert podName:4536a628-89aa-4f79-b180-9199d3cf390a nodeName:}" failed. No retries permitted until 2026-04-04 02:17:19.384357472 +0000 UTC m=+1319.050132672 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-gcbfv" (UID: "4536a628-89aa-4f79-b180-9199d3cf390a") : secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:15 crc kubenswrapper[4681]: I0404 02:17:15.789806 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.789960 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:15 crc kubenswrapper[4681]: E0404 02:17:15.790038 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert podName:b54a4f45-de00-4dd5-95d4-f96a21d34189 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:19.790020583 +0000 UTC m=+1319.455795703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert") pod "openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" (UID: "b54a4f45-de00-4dd5-95d4-f96a21d34189") : secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:16 crc kubenswrapper[4681]: I0404 02:17:16.095001 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:16 crc kubenswrapper[4681]: I0404 02:17:16.095078 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:16 crc kubenswrapper[4681]: E0404 02:17:16.095183 4681 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Apr 04 02:17:16 crc kubenswrapper[4681]: E0404 02:17:16.095323 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:16 crc kubenswrapper[4681]: E0404 02:17:16.095363 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:20.095337998 +0000 UTC m=+1319.761113108 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "metrics-server-cert" not found Apr 04 02:17:16 crc kubenswrapper[4681]: E0404 02:17:16.095391 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:20.09538131 +0000 UTC m=+1319.761156660 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:18 crc kubenswrapper[4681]: I0404 02:17:18.953225 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:17:18 crc kubenswrapper[4681]: I0404 02:17:18.955110 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:18 crc kubenswrapper[4681]: I0404 02:17:18.974198 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.045863 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.045915 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.046038 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xwt9\" (UniqueName: \"kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.146792 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.146831 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.146910 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xwt9\" (UniqueName: \"kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.147445 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.147468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.177068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xwt9\" (UniqueName: \"kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9\") pod \"community-operators-zrps5\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.281181 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.450500 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:19 crc kubenswrapper[4681]: E0404 02:17:19.450692 4681 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:19 crc kubenswrapper[4681]: E0404 02:17:19.450989 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert podName:4536a628-89aa-4f79-b180-9199d3cf390a nodeName:}" failed. No retries permitted until 2026-04-04 02:17:27.450968445 +0000 UTC m=+1327.116743565 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert") pod "infra-operator-controller-manager-7ffb6b7cdc-gcbfv" (UID: "4536a628-89aa-4f79-b180-9199d3cf390a") : secret "infra-operator-webhook-server-cert" not found Apr 04 02:17:19 crc kubenswrapper[4681]: I0404 02:17:19.857205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:19 crc kubenswrapper[4681]: E0404 02:17:19.857464 4681 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:19 crc kubenswrapper[4681]: E0404 02:17:19.857512 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert podName:b54a4f45-de00-4dd5-95d4-f96a21d34189 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:27.85749714 +0000 UTC m=+1327.523272260 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert") pod "openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" (UID: "b54a4f45-de00-4dd5-95d4-f96a21d34189") : secret "openstack-baremetal-operator-webhook-server-cert" not found Apr 04 02:17:20 crc kubenswrapper[4681]: I0404 02:17:20.162219 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:20 crc kubenswrapper[4681]: I0404 02:17:20.162350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:20 crc kubenswrapper[4681]: E0404 02:17:20.162538 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:20 crc kubenswrapper[4681]: E0404 02:17:20.162587 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:28.162572198 +0000 UTC m=+1327.828347318 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:20 crc kubenswrapper[4681]: E0404 02:17:20.163198 4681 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Apr 04 02:17:20 crc kubenswrapper[4681]: E0404 02:17:20.163308 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:28.163288768 +0000 UTC m=+1327.829063878 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "metrics-server-cert" not found Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.847832 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"] Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.850542 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.864944 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"] Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.948277 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.948334 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6pcz\" (UniqueName: \"kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:24 crc kubenswrapper[4681]: I0404 02:17:24.948361 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.050110 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.050242 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n6pcz\" (UniqueName: \"kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.050293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.050729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.050784 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.077289 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6pcz\" (UniqueName: \"kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz\") pod \"certified-operators-fsgjs\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: I0404 02:17:25.171283 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:17:25 crc kubenswrapper[4681]: E0404 02:17:25.214830 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:96eade4f229c073e64fb9ff9c5a8479c93078b1007469ac1ea7d8135e1d29946" Apr 04 02:17:25 crc kubenswrapper[4681]: E0404 02:17:25.214994 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:96eade4f229c073e64fb9ff9c5a8479c93078b1007469ac1ea7d8135e1d29946,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sj9nc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-559d8fdb6b-tmg65_openstack-operators(6479782a-b4ab-4e90-a9bd-29ef0a41f9d7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:25 crc kubenswrapper[4681]: E0404 02:17:25.216139 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" podUID="6479782a-b4ab-4e90-a9bd-29ef0a41f9d7" Apr 04 02:17:26 crc kubenswrapper[4681]: E0404 02:17:26.212561 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:96eade4f229c073e64fb9ff9c5a8479c93078b1007469ac1ea7d8135e1d29946\\\"\"" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" podUID="6479782a-b4ab-4e90-a9bd-29ef0a41f9d7" Apr 04 02:17:26 crc kubenswrapper[4681]: I0404 02:17:26.524612 4681 patch_prober.go:28] interesting 
pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:17:26 crc kubenswrapper[4681]: I0404 02:17:26.525129 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:17:27 crc kubenswrapper[4681]: E0404 02:17:27.391050 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:0c226cbc35bf93181b36bb7d0e5e1cd65d370d1f53c66895fdc73f9f84d5a06b" Apr 04 02:17:27 crc kubenswrapper[4681]: E0404 02:17:27.391288 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:0c226cbc35bf93181b36bb7d0e5e1cd65d370d1f53c66895fdc73f9f84d5a06b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kvsb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d46cccfb9-ttwtp_openstack-operators(23b37abe-289b-45e9-b55b-e2985e411401): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:27 crc kubenswrapper[4681]: E0404 02:17:27.392558 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" podUID="23b37abe-289b-45e9-b55b-e2985e411401" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.500324 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.517171 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4536a628-89aa-4f79-b180-9199d3cf390a-cert\") pod \"infra-operator-controller-manager-7ffb6b7cdc-gcbfv\" (UID: \"4536a628-89aa-4f79-b180-9199d3cf390a\") " pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.771289 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-grgpv" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.779889 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.913595 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:27 crc kubenswrapper[4681]: I0404 02:17:27.926995 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b54a4f45-de00-4dd5-95d4-f96a21d34189-cert\") pod \"openstack-baremetal-operator-controller-manager-b7b49d78f-skbql\" (UID: \"b54a4f45-de00-4dd5-95d4-f96a21d34189\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:28 crc kubenswrapper[4681]: I0404 02:17:28.201843 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-cf9g2" Apr 04 02:17:28 crc kubenswrapper[4681]: I0404 02:17:28.211146 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" Apr 04 02:17:28 crc kubenswrapper[4681]: I0404 02:17:28.228690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:28 crc kubenswrapper[4681]: I0404 02:17:28.228801 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.229160 4681 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.229230 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs podName:df89fca6-3fb4-4d85-95df-4b48e4a1e884 nodeName:}" failed. No retries permitted until 2026-04-04 02:17:44.229208325 +0000 UTC m=+1343.894983445 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs") pod "openstack-operator-controller-manager-667cfd88d7-2k5wm" (UID: "df89fca6-3fb4-4d85-95df-4b48e4a1e884") : secret "webhook-server-cert" not found Apr 04 02:17:28 crc kubenswrapper[4681]: I0404 02:17:28.235898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-metrics-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.239318 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:0c226cbc35bf93181b36bb7d0e5e1cd65d370d1f53c66895fdc73f9f84d5a06b\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" podUID="23b37abe-289b-45e9-b55b-e2985e411401" Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.397917 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:b6d44a28b047f402b17b4cc07584f04cd6f1168d8742a9a8b17a9ce7c8550c5a" Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.399176 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:b6d44a28b047f402b17b4cc07584f04cd6f1168d8742a9a8b17a9ce7c8550c5a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h4gvk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7594f57946-c9j8w_openstack-operators(d8331de2-1469-4856-a56c-f1e107779ca4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:28 crc kubenswrapper[4681]: E0404 02:17:28.400581 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" podUID="d8331de2-1469-4856-a56c-f1e107779ca4" Apr 04 02:17:29 crc kubenswrapper[4681]: E0404 02:17:29.244507 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:b6d44a28b047f402b17b4cc07584f04cd6f1168d8742a9a8b17a9ce7c8550c5a\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" podUID="d8331de2-1469-4856-a56c-f1e107779ca4" Apr 04 02:17:38 crc kubenswrapper[4681]: E0404 02:17:38.071072 4681 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:67c5689bf3ea12b55f2c76e8dbefad03a980a6545d46f16493004cdcff4bfee4" Apr 04 02:17:38 crc kubenswrapper[4681]: E0404 02:17:38.071565 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:67c5689bf3ea12b55f2c76e8dbefad03a980a6545d46f16493004cdcff4bfee4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h8l4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-6ccfd84cb4-sq9cm_openstack-operators(be876d09-d6fd-46f7-a03c-8c13f72bee75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:38 crc kubenswrapper[4681]: E0404 02:17:38.073003 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" podUID="be876d09-d6fd-46f7-a03c-8c13f72bee75" Apr 04 02:17:38 crc kubenswrapper[4681]: E0404 02:17:38.313414 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:67c5689bf3ea12b55f2c76e8dbefad03a980a6545d46f16493004cdcff4bfee4\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" podUID="be876d09-d6fd-46f7-a03c-8c13f72bee75" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.486286 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.487857 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.499115 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.604626 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbt6\" (UniqueName: \"kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.604694 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.604723 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.707401 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content\") pod \"redhat-marketplace-n9lp2\" (UID: 
\"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.707462 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.707574 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgbt6\" (UniqueName: \"kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.707933 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.708056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.726927 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgbt6\" (UniqueName: \"kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6\") pod \"redhat-marketplace-n9lp2\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " 
pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:39 crc kubenswrapper[4681]: I0404 02:17:39.808176 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.064051 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3301084981/2\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.064255 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2jg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zv8zs_openstack-operators(2acd20f5-b31c-411a-989c-f0ad12628894): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage3301084981/2\": happened during read: context canceled" logger="UnhandledError" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.065490 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage3301084981/2\\\": happened during read: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.146754 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:16b276e3f22dc79232c2150ae53becd10ebcf2d9b883f7df4ff98a929eefac91" 
Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.146977 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:16b276e3f22dc79232c2150ae53becd10ebcf2d9b883f7df4ff98a929eefac91,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zc628,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6554749d88-tj6wj_openstack-operators(28828ebb-13dc-4ba1-98e1-39c6f38e9245): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.148486 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" podUID="28828ebb-13dc-4ba1-98e1-39c6f38e9245" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.321909 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:16b276e3f22dc79232c2150ae53becd10ebcf2d9b883f7df4ff98a929eefac91\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" podUID="28828ebb-13dc-4ba1-98e1-39c6f38e9245" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.730245 4681 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:e5f1303497321c083933cd8ab46e0c95c3f7f3f4101e0c2c3df79eb089abab9e" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.730490 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:e5f1303497321c083933cd8ab46e0c95c3f7f3f4101e0c2c3df79eb089abab9e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d25v6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-6b7497dc59-tllnk_openstack-operators(82ce5791-77cb-418c-b3d2-7f49f625ccf1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:40 crc kubenswrapper[4681]: E0404 02:17:40.731707 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" podUID="82ce5791-77cb-418c-b3d2-7f49f625ccf1" Apr 04 02:17:41 crc kubenswrapper[4681]: E0404 02:17:41.327366 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:e5f1303497321c083933cd8ab46e0c95c3f7f3f4101e0c2c3df79eb089abab9e\\\"\"" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" podUID="82ce5791-77cb-418c-b3d2-7f49f625ccf1" Apr 04 02:17:41 crc kubenswrapper[4681]: E0404 02:17:41.656846 4681 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:741c1bc38c0d9430995a0d0aae6adae5c1f490b23d620564595cdd40683df68b" Apr 04 02:17:41 crc kubenswrapper[4681]: E0404 02:17:41.657337 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:741c1bc38c0d9430995a0d0aae6adae5c1f490b23d620564595cdd40683df68b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5krgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f96574b5-82k6f_openstack-operators(1a4403a6-7904-4764-aba4-02a2bcc4bc19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:41 crc kubenswrapper[4681]: E0404 02:17:41.659032 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" podUID="1a4403a6-7904-4764-aba4-02a2bcc4bc19" Apr 04 02:17:42 crc kubenswrapper[4681]: E0404 02:17:42.333911 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:741c1bc38c0d9430995a0d0aae6adae5c1f490b23d620564595cdd40683df68b\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" podUID="1a4403a6-7904-4764-aba4-02a2bcc4bc19" Apr 04 02:17:42 crc kubenswrapper[4681]: E0404 02:17:42.359904 4681 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:cbc03ca8837c64974a4670506a8df688c44432c4aab095f3fa7f1330e72bd3bd" Apr 04 02:17:42 crc kubenswrapper[4681]: E0404 02:17:42.360193 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:cbc03ca8837c64974a4670506a8df688c44432c4aab095f3fa7f1330e72bd3bd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m4xbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-fbdcf7f7b-844tj_openstack-operators(eb76f1dc-bae9-491f-a58e-3cc1f9c15571): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:42 crc kubenswrapper[4681]: E0404 02:17:42.361482 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" podUID="eb76f1dc-bae9-491f-a58e-3cc1f9c15571" Apr 04 02:17:43 crc kubenswrapper[4681]: E0404 02:17:43.577297 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:4c7f06f3d9676d55c6f6f726df750fe370f71756048184106e06713923612267" Apr 04 02:17:43 crc kubenswrapper[4681]: E0404 02:17:43.577490 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:4c7f06f3d9676d55c6f6f726df750fe370f71756048184106e06713923612267,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6b5st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-58689c6fff-rnnzd_openstack-operators(6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:43 crc kubenswrapper[4681]: E0404 02:17:43.578690 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" podUID="6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99" Apr 04 02:17:43 crc kubenswrapper[4681]: E0404 02:17:43.641618 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:cbc03ca8837c64974a4670506a8df688c44432c4aab095f3fa7f1330e72bd3bd\\\"\"" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" podUID="eb76f1dc-bae9-491f-a58e-3cc1f9c15571" Apr 04 02:17:44 crc kubenswrapper[4681]: I0404 02:17:44.234639 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:44 crc kubenswrapper[4681]: I0404 02:17:44.240120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/df89fca6-3fb4-4d85-95df-4b48e4a1e884-webhook-certs\") pod \"openstack-operator-controller-manager-667cfd88d7-2k5wm\" (UID: \"df89fca6-3fb4-4d85-95df-4b48e4a1e884\") " pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:44 crc kubenswrapper[4681]: E0404 02:17:44.353857 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:4c7f06f3d9676d55c6f6f726df750fe370f71756048184106e06713923612267\\\"\"" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" podUID="6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99" Apr 04 02:17:44 crc kubenswrapper[4681]: I0404 02:17:44.465861 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-m6zvn" Apr 04 02:17:44 crc kubenswrapper[4681]: I0404 02:17:44.470856 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" Apr 04 02:17:44 crc kubenswrapper[4681]: E0404 02:17:44.486167 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:f649f31aca138e78f72963c98bafaeb0da514133f1d731019552c76dad08394c" Apr 04 02:17:44 crc kubenswrapper[4681]: E0404 02:17:44.486392 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:f649f31aca138e78f72963c98bafaeb0da514133f1d731019552c76dad08394c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-44sr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-648bdc7f99-skr68_openstack-operators(4513182b-1bdb-40a2-ba02-2e8aa8567819): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:44 crc kubenswrapper[4681]: E0404 02:17:44.488583 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" podUID="4513182b-1bdb-40a2-ba02-2e8aa8567819" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.087447 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:388c06cd947e4eaf823e3d64de2d3ba7660dbd9d4c01729d92bd628e5e73bc5f" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.087673 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:388c06cd947e4eaf823e3d64de2d3ba7660dbd9d4c01729d92bd628e5e73bc5f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pr5sz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5d6f9fd68c-x7x9p_openstack-operators(3ac3008b-06b0-4ab7-a59f-3e7682627410): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.088770 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" podUID="3ac3008b-06b0-4ab7-a59f-3e7682627410" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.360042 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:f649f31aca138e78f72963c98bafaeb0da514133f1d731019552c76dad08394c\\\"\"" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" podUID="4513182b-1bdb-40a2-ba02-2e8aa8567819" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.360164 4681 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:388c06cd947e4eaf823e3d64de2d3ba7660dbd9d4c01729d92bd628e5e73bc5f\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" podUID="3ac3008b-06b0-4ab7-a59f-3e7682627410" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.893125 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:9fd3681c6c8549a78b12dc5e83676bc0956558b01327b95598aa424d62acb189" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.893704 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:9fd3681c6c8549a78b12dc5e83676bc0956558b01327b95598aa424d62acb189,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7zq4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-dbf8bb784-4gx6m_openstack-operators(80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:45 crc kubenswrapper[4681]: E0404 02:17:45.895136 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" podUID="80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf" Apr 04 02:17:46 crc kubenswrapper[4681]: E0404 02:17:46.370232 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:9fd3681c6c8549a78b12dc5e83676bc0956558b01327b95598aa424d62acb189\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" podUID="80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf" Apr 04 02:17:46 crc kubenswrapper[4681]: E0404 02:17:46.799953 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2" Apr 04 02:17:46 crc kubenswrapper[4681]: E0404 02:17:46.800616 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x8nz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-d6f76d4c7-2vrfg_openstack-operators(8e44912b-0956-49e8-ad3e-140b3d60838e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:46 crc kubenswrapper[4681]: E0404 02:17:46.801917 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podUID="8e44912b-0956-49e8-ad3e-140b3d60838e" Apr 04 02:17:47 crc kubenswrapper[4681]: E0404 02:17:47.343417 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9" Apr 04 02:17:47 crc kubenswrapper[4681]: E0404 02:17:47.343587 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2g6f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-84464c7c78-brc8n_openstack-operators(b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:17:47 crc kubenswrapper[4681]: E0404 02:17:47.344702 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podUID="b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.122719 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:17:49 crc kubenswrapper[4681]: W0404 02:17:49.153387 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b649a9d_e38f_4496_b6b7_39145dfdb158.slice/crio-628b1ccaf472dfad8d7abad5220c34e904e8b67b8ab0084c52159ee24b37f4e5 WatchSource:0}: Error finding container 628b1ccaf472dfad8d7abad5220c34e904e8b67b8ab0084c52159ee24b37f4e5: Status 
404 returned error can't find the container with id 628b1ccaf472dfad8d7abad5220c34e904e8b67b8ab0084c52159ee24b37f4e5 Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.300417 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql"] Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.308415 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"] Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.395937 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" event={"ID":"06717285-4d9d-4b9d-919e-106dd0ec0274","Type":"ContainerStarted","Data":"8fa2170ca911daa9f261c6d40271e86cea4f11c8c40aad360592304e37f1fe0e"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.397058 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.404085 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" event={"ID":"895bcf63-b464-4408-a0f2-8217d1a6179b","Type":"ContainerStarted","Data":"d2e9efcf5f86425a5242551eea147143fbd17e0732338b1b2241305e31055fe9"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.404236 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.410025 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" event={"ID":"23b37abe-289b-45e9-b55b-e2985e411401","Type":"ContainerStarted","Data":"1a47d68912b76d3e0309ec29bdb569a606b9a85acbdc3cf34c874427dfb5c801"} Apr 04 02:17:49 crc 
kubenswrapper[4681]: I0404 02:17:49.410381 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.416765 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv"] Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.421145 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerID="3a87b51ecd43809046a28cb990e4332fa09aee0f669e056bada0ede67553987f" exitCode=0 Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.421239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerDied","Data":"3a87b51ecd43809046a28cb990e4332fa09aee0f669e056bada0ede67553987f"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.421279 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerStarted","Data":"628b1ccaf472dfad8d7abad5220c34e904e8b67b8ab0084c52159ee24b37f4e5"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.461021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" event={"ID":"d8331de2-1469-4856-a56c-f1e107779ca4","Type":"ContainerStarted","Data":"163f5ec3abcde420bd375800995e734d14358d2d8045e237a1b6e46ece44d7b5"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.461904 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.474376 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" event={"ID":"b54a4f45-de00-4dd5-95d4-f96a21d34189","Type":"ContainerStarted","Data":"6d6d39b3696d2fbdc956080bbd632b6e7f714300c2fc99672ea9addf6946df57"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.486972 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp" podStartSLOduration=2.391260317 podStartE2EDuration="38.486949714s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.729898673 +0000 UTC m=+1312.395673783" lastFinishedPulling="2026-04-04 02:17:48.82558806 +0000 UTC m=+1348.491363180" observedRunningTime="2026-04-04 02:17:49.474685257 +0000 UTC m=+1349.140460377" watchObservedRunningTime="2026-04-04 02:17:49.486949714 +0000 UTC m=+1349.152724834" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.491305 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752" podStartSLOduration=6.827587433 podStartE2EDuration="38.491289723s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.713966245 +0000 UTC m=+1312.379741365" lastFinishedPulling="2026-04-04 02:17:44.377668515 +0000 UTC m=+1344.043443655" observedRunningTime="2026-04-04 02:17:49.418248206 +0000 UTC m=+1349.084023316" watchObservedRunningTime="2026-04-04 02:17:49.491289723 +0000 UTC m=+1349.157064833" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.495971 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" event={"ID":"da0d70da-b61c-41ee-938b-f4a931300f75","Type":"ContainerStarted","Data":"bdfe276e7ff88a1e910505e22aeb20645285b92caf6479872769bf086c0fcd49"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.497344 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.522015 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerStarted","Data":"55112aa033c3820c7b12352502fa075793afc2d814c6252b20cdc56a31a327ff"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.527239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" event={"ID":"a2899081-691e-4ad2-8e98-4fb8b955a0cd","Type":"ContainerStarted","Data":"978a27bea426537ef4d8fd7f73b5a8943a2bc91dea569b8f8512e0953e27642a"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.528077 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.534669 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9" podStartSLOduration=10.437975366 podStartE2EDuration="38.534652784s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.569475337 +0000 UTC m=+1312.235250457" lastFinishedPulling="2026-04-04 02:17:40.666152755 +0000 UTC m=+1340.331927875" observedRunningTime="2026-04-04 02:17:49.530648794 +0000 UTC m=+1349.196423924" watchObservedRunningTime="2026-04-04 02:17:49.534652784 +0000 UTC m=+1349.200427904" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.545486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" 
event={"ID":"856d74a1-4df8-446a-a82b-3dcc76f1af70","Type":"ContainerStarted","Data":"aba7a397ddcedc44cda1bdae3454d3895deaa2e9aa8278b9f1f870378bf6d4b9"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.545677 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.558833 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" event={"ID":"6479782a-b4ab-4e90-a9bd-29ef0a41f9d7","Type":"ContainerStarted","Data":"4b1b3a8fc36e423d30a4100c96c075a045cf7646cc936cd509d9596feddaaa83"} Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.559404 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.572908 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.588334 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm"] Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.601097 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd" podStartSLOduration=3.141549281 podStartE2EDuration="38.601076708s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.459329615 +0000 UTC m=+1313.125104735" lastFinishedPulling="2026-04-04 02:17:48.918857042 +0000 UTC m=+1348.584632162" observedRunningTime="2026-04-04 02:17:49.572776551 +0000 UTC m=+1349.238551681" watchObservedRunningTime="2026-04-04 02:17:49.601076708 +0000 UTC m=+1349.266851828" Apr 04 02:17:49 crc 
kubenswrapper[4681]: I0404 02:17:49.614658 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w" podStartSLOduration=3.176993085 podStartE2EDuration="38.614630601s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.387970215 +0000 UTC m=+1313.053745335" lastFinishedPulling="2026-04-04 02:17:48.825607731 +0000 UTC m=+1348.491382851" observedRunningTime="2026-04-04 02:17:49.593841779 +0000 UTC m=+1349.259616899" watchObservedRunningTime="2026-04-04 02:17:49.614630601 +0000 UTC m=+1349.280405721" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.618864 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk" podStartSLOduration=3.244949001 podStartE2EDuration="38.618841066s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.453284929 +0000 UTC m=+1313.119060049" lastFinishedPulling="2026-04-04 02:17:48.827176994 +0000 UTC m=+1348.492952114" observedRunningTime="2026-04-04 02:17:49.615414682 +0000 UTC m=+1349.281189812" watchObservedRunningTime="2026-04-04 02:17:49.618841066 +0000 UTC m=+1349.284616206" Apr 04 02:17:49 crc kubenswrapper[4681]: I0404 02:17:49.657807 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65" podStartSLOduration=3.341273135 podStartE2EDuration="38.657781545s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.400951241 +0000 UTC m=+1313.066726361" lastFinishedPulling="2026-04-04 02:17:48.717459651 +0000 UTC m=+1348.383234771" observedRunningTime="2026-04-04 02:17:49.651489253 +0000 UTC m=+1349.317264383" watchObservedRunningTime="2026-04-04 02:17:49.657781545 +0000 UTC m=+1349.323556675" Apr 04 02:17:49 crc 
kubenswrapper[4681]: I0404 02:17:49.677972 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4" podStartSLOduration=3.254412791 podStartE2EDuration="38.677956689s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.403515262 +0000 UTC m=+1313.069290382" lastFinishedPulling="2026-04-04 02:17:48.82705916 +0000 UTC m=+1348.492834280" observedRunningTime="2026-04-04 02:17:49.671234605 +0000 UTC m=+1349.337009725" watchObservedRunningTime="2026-04-04 02:17:49.677956689 +0000 UTC m=+1349.343731809" Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.571139 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" event={"ID":"4536a628-89aa-4f79-b180-9199d3cf390a","Type":"ContainerStarted","Data":"0d5c7a2667b9df9a17e83c3363a0fea167e51f40ec7695c2b0adc34cf79d5823"} Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.574028 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerID="2cc6a532607c697e9bb62fe1560c22bb0a811b6c57b3950a2612dc4ee4ff1115" exitCode=0 Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.574078 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerDied","Data":"2cc6a532607c697e9bb62fe1560c22bb0a811b6c57b3950a2612dc4ee4ff1115"} Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.578189 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" event={"ID":"df89fca6-3fb4-4d85-95df-4b48e4a1e884","Type":"ContainerStarted","Data":"e6e04b3e7f4da95f97501579eef8388f19ab321d8864419b86206923167a7388"} Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.578216 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" event={"ID":"df89fca6-3fb4-4d85-95df-4b48e4a1e884","Type":"ContainerStarted","Data":"5be6ba14f7345e75ed5ba8f0f09034193d89be7934fd3a6fff631d6d69cda961"}
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.578606 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm"
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.579841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" event={"ID":"be876d09-d6fd-46f7-a03c-8c13f72bee75","Type":"ContainerStarted","Data":"613aca84db55131d83eccd8307b917cf978d7ed85c9f4949a28da5da1968ae02"}
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.580154 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm"
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.581379 4681 generic.go:334] "Generic (PLEG): container finished" podID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerID="8ec8b822824c864bd51fe804279935945c588ae5b65baf236e99310da86b67e4" exitCode=0
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.582068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerDied","Data":"8ec8b822824c864bd51fe804279935945c588ae5b65baf236e99310da86b67e4"}
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.582083 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerStarted","Data":"c24677847449dd7b6a56a7c6f4b58ebf314595802e2997bb50e940eb7f70e4e1"}
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.620583 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm" podStartSLOduration=2.6715424949999997 podStartE2EDuration="39.620560687s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.838188417 +0000 UTC m=+1312.503963547" lastFinishedPulling="2026-04-04 02:17:49.787206619 +0000 UTC m=+1349.452981739" observedRunningTime="2026-04-04 02:17:50.616743361 +0000 UTC m=+1350.282518471" watchObservedRunningTime="2026-04-04 02:17:50.620560687 +0000 UTC m=+1350.286335807"
Apr 04 02:17:50 crc kubenswrapper[4681]: I0404 02:17:50.679903 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm" podStartSLOduration=38.679881416 podStartE2EDuration="38.679881416s" podCreationTimestamp="2026-04-04 02:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:17:50.674394195 +0000 UTC m=+1350.340169315" watchObservedRunningTime="2026-04-04 02:17:50.679881416 +0000 UTC m=+1350.345656536"
Apr 04 02:17:51 crc kubenswrapper[4681]: I0404 02:17:51.591826 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerID="871697c07291c094db880103fc8a57301ab3394d88b2689d0e243e225e6b4de6" exitCode=0
Apr 04 02:17:51 crc kubenswrapper[4681]: I0404 02:17:51.592611 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerDied","Data":"871697c07291c094db880103fc8a57301ab3394d88b2689d0e243e225e6b4de6"}
Apr 04 02:17:52 crc kubenswrapper[4681]: I0404 02:17:52.608560 4681 generic.go:334] "Generic (PLEG): container finished" podID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerID="c9f853411308d27149baba2023e2f0497a3e09a26ea37fb624697009daeb04eb" exitCode=0
Apr 04 02:17:52 crc kubenswrapper[4681]: I0404 02:17:52.608616 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerDied","Data":"c9f853411308d27149baba2023e2f0497a3e09a26ea37fb624697009daeb04eb"}
Apr 04 02:17:53 crc kubenswrapper[4681]: E0404 02:17:53.204002 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894"
Apr 04 02:17:54 crc kubenswrapper[4681]: I0404 02:17:54.476949 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-667cfd88d7-2k5wm"
Apr 04 02:17:56 crc kubenswrapper[4681]: I0404 02:17:56.524458 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:17:56 crc kubenswrapper[4681]: I0404 02:17:56.525420 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:17:56 crc kubenswrapper[4681]: I0404 02:17:56.525597 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 02:17:56 crc kubenswrapper[4681]: I0404 02:17:56.526426 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Apr 04 02:17:56 crc kubenswrapper[4681]: I0404 02:17:56.526639 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75" gracePeriod=600
Apr 04 02:17:57 crc kubenswrapper[4681]: I0404 02:17:57.655064 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75" exitCode=0
Apr 04 02:17:57 crc kubenswrapper[4681]: I0404 02:17:57.655139 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75"}
Apr 04 02:17:57 crc kubenswrapper[4681]: I0404 02:17:57.655470 4681 scope.go:117] "RemoveContainer" containerID="e2e43546dbe2461b9e3426a18769af831106cbff42f433957bda33adde473ed0"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.164156 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587818-zwhsm"]
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.165459 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587818-zwhsm"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.168011 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.168656 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.168713 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.180605 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587818-zwhsm"]
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.279306 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vlb4\" (UniqueName: \"kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4\") pod \"auto-csr-approver-29587818-zwhsm\" (UID: \"8e6e9171-5cc4-45fb-9668-dee0d4a7df22\") " pod="openshift-infra/auto-csr-approver-29587818-zwhsm"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.380194 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vlb4\" (UniqueName: \"kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4\") pod \"auto-csr-approver-29587818-zwhsm\" (UID: \"8e6e9171-5cc4-45fb-9668-dee0d4a7df22\") " pod="openshift-infra/auto-csr-approver-29587818-zwhsm"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.408038 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vlb4\" (UniqueName: \"kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4\") pod \"auto-csr-approver-29587818-zwhsm\" (UID: \"8e6e9171-5cc4-45fb-9668-dee0d4a7df22\") " pod="openshift-infra/auto-csr-approver-29587818-zwhsm"
Apr 04 02:18:00 crc kubenswrapper[4681]: I0404 02:18:00.494852 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587818-zwhsm"
Apr 04 02:18:00 crc kubenswrapper[4681]: E0404 02:18:00.579573 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:4b983bc9e9cebbde8a781fdaaf774b8dd13bb30f66f323d94c2187707f6552d9\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podUID="b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab"
Apr 04 02:18:01 crc kubenswrapper[4681]: E0404 02:18:01.204194 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:566b1f4d3f3d50e9620b845e12ef72bf3a27e07233a9c7424c1102045a4e74a2\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podUID="8e44912b-0956-49e8-ad3e-140b3d60838e"
Apr 04 02:18:01 crc kubenswrapper[4681]: I0404 02:18:01.699229 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86644c9c9c-kvnd9"
Apr 04 02:18:01 crc kubenswrapper[4681]: I0404 02:18:01.722451 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d46cccfb9-ttwtp"
Apr 04 02:18:01 crc kubenswrapper[4681]: I0404 02:18:01.817056 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-8684f86954-4z752"
Apr 04 02:18:01 crc kubenswrapper[4681]: I0404 02:18:01.859366 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6ccfd84cb4-sq9cm"
Apr 04 02:18:01 crc kubenswrapper[4681]: I0404 02:18:01.908147 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587818-zwhsm"]
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.183900 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-74tl4"
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.226412 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7594f57946-c9j8w"
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.362779 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-559d8fdb6b-tmg65"
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.448668 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56ccc97cf5-j87hk"
Apr 04 02:18:02 crc kubenswrapper[4681]: W0404 02:18:02.460542 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e6e9171_5cc4_45fb_9668_dee0d4a7df22.slice/crio-32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2 WatchSource:0}: Error finding container 32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2: Status 404 returned error can't find the container with id 32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.555446 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-58b78987f4-nwsmd"
Apr 04 02:18:02 crc kubenswrapper[4681]: I0404 02:18:02.704942 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" event={"ID":"8e6e9171-5cc4-45fb-9668-dee0d4a7df22","Type":"ContainerStarted","Data":"32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2"}
Apr 04 02:18:09 crc kubenswrapper[4681]: I0404 02:18:09.758860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5"}
Apr 04 02:18:18 crc kubenswrapper[4681]: I0404 02:18:18.833646 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerStarted","Data":"71335e500d45453de345173dbecbae45f442514949e2052d76521b716d682061"}
Apr 04 02:18:19 crc kubenswrapper[4681]: I0404 02:18:19.855391 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerID="71335e500d45453de345173dbecbae45f442514949e2052d76521b716d682061" exitCode=0
Apr 04 02:18:19 crc kubenswrapper[4681]: I0404 02:18:19.855437 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerDied","Data":"71335e500d45453de345173dbecbae45f442514949e2052d76521b716d682061"}
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.876976 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" event={"ID":"82ce5791-77cb-418c-b3d2-7f49f625ccf1","Type":"ContainerStarted","Data":"aa0b0c47763b3e840261fc064faf4001cd0067e10234f9b01323135f1e65d48b"}
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.877921 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk"
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.880555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerStarted","Data":"90d90aa2ff4aa0b96b81f243f07ee4c665ac9a98457f572593d244f49257849b"}
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.883734 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerStarted","Data":"bae30a9c8f9008340e80437202b1a37deeb64b1ca587aca5c7bfac9ec418144a"}
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.904007 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk" podStartSLOduration=4.04272032 podStartE2EDuration="1m10.903982773s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.179489079 +0000 UTC m=+1312.845264199" lastFinishedPulling="2026-04-04 02:18:20.040751532 +0000 UTC m=+1379.706526652" observedRunningTime="2026-04-04 02:18:21.893362631 +0000 UTC m=+1381.559137751" watchObservedRunningTime="2026-04-04 02:18:21.903982773 +0000 UTC m=+1381.569757913"
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.918575 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n9lp2" podStartSLOduration=13.368613555 podStartE2EDuration="42.918560893s" podCreationTimestamp="2026-04-04 02:17:39 +0000 UTC" firstStartedPulling="2026-04-04 02:17:50.584298331 +0000 UTC m=+1350.250073451" lastFinishedPulling="2026-04-04 02:18:20.134245669 +0000 UTC m=+1379.800020789" observedRunningTime="2026-04-04 02:18:21.913781642 +0000 UTC m=+1381.579556762" watchObservedRunningTime="2026-04-04 02:18:21.918560893 +0000 UTC m=+1381.584336013"
Apr 04 02:18:21 crc kubenswrapper[4681]: I0404 02:18:21.938000 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zrps5" podStartSLOduration=33.723967399 podStartE2EDuration="1m3.937979397s" podCreationTimestamp="2026-04-04 02:17:18 +0000 UTC" firstStartedPulling="2026-04-04 02:17:49.440176269 +0000 UTC m=+1349.105951389" lastFinishedPulling="2026-04-04 02:18:19.654188267 +0000 UTC m=+1379.319963387" observedRunningTime="2026-04-04 02:18:21.930319736 +0000 UTC m=+1381.596094856" watchObservedRunningTime="2026-04-04 02:18:21.937979397 +0000 UTC m=+1381.603754517"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.910023 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" event={"ID":"8e6e9171-5cc4-45fb-9668-dee0d4a7df22","Type":"ContainerStarted","Data":"a08da319b1adb8c73595ab79416fd2ad1b62d1a5ad788992d80fc4351c63101a"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.937047 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" podStartSLOduration=20.087710983 podStartE2EDuration="22.937028624s" podCreationTimestamp="2026-04-04 02:18:00 +0000 UTC" firstStartedPulling="2026-04-04 02:18:18.382691727 +0000 UTC m=+1378.048466847" lastFinishedPulling="2026-04-04 02:18:21.232009368 +0000 UTC m=+1380.897784488" observedRunningTime="2026-04-04 02:18:22.934514435 +0000 UTC m=+1382.600289555" watchObservedRunningTime="2026-04-04 02:18:22.937028624 +0000 UTC m=+1382.602803744"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.938988 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" event={"ID":"b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab","Type":"ContainerStarted","Data":"8c7931f8f9204629bfec91a74f09c8a141989c72490ff0f3d317047ad1050da9"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.939644 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.940948 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" event={"ID":"3ac3008b-06b0-4ab7-a59f-3e7682627410","Type":"ContainerStarted","Data":"1a6eaef67c515be7dcc6fdca728e12809d324cb1053da312887f324a1505b6de"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.941329 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.960977 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" event={"ID":"eb76f1dc-bae9-491f-a58e-3cc1f9c15571","Type":"ContainerStarted","Data":"f8c23510f5edf3b1a198aae1a0d46565dea19eacd485c85970af7667d05851dd"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.961702 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.963243 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" event={"ID":"8e44912b-0956-49e8-ad3e-140b3d60838e","Type":"ContainerStarted","Data":"dd17bd5f372beb7fb76a079071078ce24cfaf7f941c31022cf8c6c0894e684e7"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.963634 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.964718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" event={"ID":"4536a628-89aa-4f79-b180-9199d3cf390a","Type":"ContainerStarted","Data":"053b5c844ad8a412754b76927dcda9589da6234eccc10414205738cc44d23040"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.965172 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.979412 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" podStartSLOduration=4.64200506 podStartE2EDuration="1m11.979386968s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.453206787 +0000 UTC m=+1313.118981897" lastFinishedPulling="2026-04-04 02:18:20.790588665 +0000 UTC m=+1380.456363805" observedRunningTime="2026-04-04 02:18:22.972057886 +0000 UTC m=+1382.637833026" watchObservedRunningTime="2026-04-04 02:18:22.979386968 +0000 UTC m=+1382.645162098"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.994312 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" event={"ID":"28828ebb-13dc-4ba1-98e1-39c6f38e9245","Type":"ContainerStarted","Data":"0e61ddd1356130e968e87f4b582926164f2add9783cc43f16d5e1852fcea637a"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.994610 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj"
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.996451 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" event={"ID":"6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99","Type":"ContainerStarted","Data":"966951e990e0cdb063de709b245177d04079e6f9a781403a0fca9a8ac0a4e040"}
Apr 04 02:18:22 crc kubenswrapper[4681]: I0404 02:18:22.997159 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.008486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" event={"ID":"80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf","Type":"ContainerStarted","Data":"13b28adfc28500febc03f7675fac41aa12413fca97ec3846232e6ccaf9c3ab42"}
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.009288 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.031214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" event={"ID":"b54a4f45-de00-4dd5-95d4-f96a21d34189","Type":"ContainerStarted","Data":"4ac22d82f8b641321b9dba68a86204193b8d0c99997ae6608fbac8c43df11268"}
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.031935 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.036098 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" event={"ID":"1a4403a6-7904-4764-aba4-02a2bcc4bc19","Type":"ContainerStarted","Data":"c005100f35ec521d3374579dc587c747f702e3a45bb7421efc8ec322ece390a7"}
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.036577 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.038371 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerStarted","Data":"787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a"}
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.040321 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" event={"ID":"4513182b-1bdb-40a2-ba02-2e8aa8567819","Type":"ContainerStarted","Data":"f4ba91cba99f867412de239ac4b795ab22a70bf35c889e55ec7c5cd6da631a11"}
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.051613 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" podStartSLOduration=4.588608703 podStartE2EDuration="1m12.05159631s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.460618801 +0000 UTC m=+1313.126393921" lastFinishedPulling="2026-04-04 02:18:20.923606408 +0000 UTC m=+1380.589381528" observedRunningTime="2026-04-04 02:18:23.024003983 +0000 UTC m=+1382.689779103" watchObservedRunningTime="2026-04-04 02:18:23.05159631 +0000 UTC m=+1382.717371420"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.065292 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj" podStartSLOduration=5.979020306 podStartE2EDuration="1m12.065273116s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.400985162 +0000 UTC m=+1313.066760282" lastFinishedPulling="2026-04-04 02:18:19.487237972 +0000 UTC m=+1379.153013092" observedRunningTime="2026-04-04 02:18:23.050626644 +0000 UTC m=+1382.716401764" watchObservedRunningTime="2026-04-04 02:18:23.065273116 +0000 UTC m=+1382.731048236"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.149078 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p" podStartSLOduration=4.952987289 podStartE2EDuration="1m12.149060947s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.389219529 +0000 UTC m=+1313.054994649" lastFinishedPulling="2026-04-04 02:18:20.585293187 +0000 UTC m=+1380.251068307" observedRunningTime="2026-04-04 02:18:23.147521395 +0000 UTC m=+1382.813296515" watchObservedRunningTime="2026-04-04 02:18:23.149060947 +0000 UTC m=+1382.814836067"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.150771 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv" podStartSLOduration=42.121603393 podStartE2EDuration="1m12.150762324s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:49.458200014 +0000 UTC m=+1349.123975134" lastFinishedPulling="2026-04-04 02:18:19.487358935 +0000 UTC m=+1379.153134065" observedRunningTime="2026-04-04 02:18:23.112624296 +0000 UTC m=+1382.778399416" watchObservedRunningTime="2026-04-04 02:18:23.150762324 +0000 UTC m=+1382.816537444"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.193049 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f" podStartSLOduration=4.897197476 podStartE2EDuration="1m12.193024554s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.837725123 +0000 UTC m=+1312.503500243" lastFinishedPulling="2026-04-04 02:18:20.133552211 +0000 UTC m=+1379.799327321" observedRunningTime="2026-04-04 02:18:23.192995674 +0000 UTC m=+1382.858770794" watchObservedRunningTime="2026-04-04 02:18:23.193024554 +0000 UTC m=+1382.858799674"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.194772 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj" podStartSLOduration=5.348906195 podStartE2EDuration="1m12.194759523s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.194010069 +0000 UTC m=+1312.859785189" lastFinishedPulling="2026-04-04 02:18:20.039863387 +0000 UTC m=+1379.705638517" observedRunningTime="2026-04-04 02:18:23.174756702 +0000 UTC m=+1382.840531822" watchObservedRunningTime="2026-04-04 02:18:23.194759523 +0000 UTC m=+1382.860534643"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.226800 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m" podStartSLOduration=5.04660959 podStartE2EDuration="1m12.226782061s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.954235493 +0000 UTC m=+1312.620010613" lastFinishedPulling="2026-04-04 02:18:20.134407964 +0000 UTC m=+1379.800183084" observedRunningTime="2026-04-04 02:18:23.22419899 +0000 UTC m=+1382.889974110" watchObservedRunningTime="2026-04-04 02:18:23.226782061 +0000 UTC m=+1382.892557181"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.259236 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsgjs" podStartSLOduration=28.568538865 podStartE2EDuration="59.259222033s" podCreationTimestamp="2026-04-04 02:17:24 +0000 UTC" firstStartedPulling="2026-04-04 02:17:50.57520052 +0000 UTC m=+1350.240975640" lastFinishedPulling="2026-04-04 02:18:21.265883688 +0000 UTC m=+1380.931658808" observedRunningTime="2026-04-04 02:18:23.253867476 +0000 UTC m=+1382.919642586" watchObservedRunningTime="2026-04-04 02:18:23.259222033 +0000 UTC m=+1382.924997153"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.289109 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql" podStartSLOduration=60.176748498 podStartE2EDuration="1m12.289091663s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:49.312811531 +0000 UTC m=+1348.978586651" lastFinishedPulling="2026-04-04 02:18:01.425154686 +0000 UTC m=+1361.090929816" observedRunningTime="2026-04-04 02:18:23.283132939 +0000 UTC m=+1382.948908059" watchObservedRunningTime="2026-04-04 02:18:23.289091663 +0000 UTC m=+1382.954866783"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.310599 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68" podStartSLOduration=4.893559258 podStartE2EDuration="1m12.310584253s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.716477494 +0000 UTC m=+1312.382252614" lastFinishedPulling="2026-04-04 02:18:20.133502489 +0000 UTC m=+1379.799277609" observedRunningTime="2026-04-04 02:18:23.310346037 +0000 UTC m=+1382.976121157" watchObservedRunningTime="2026-04-04 02:18:23.310584253 +0000 UTC m=+1382.976359373"
Apr 04 02:18:23 crc kubenswrapper[4681]: I0404 02:18:23.342974 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd" podStartSLOduration=4.614154744 podStartE2EDuration="1m12.342957262s" podCreationTimestamp="2026-04-04 02:17:11 +0000 UTC" firstStartedPulling="2026-04-04 02:17:12.856554021 +0000 UTC m=+1312.522329141" lastFinishedPulling="2026-04-04 02:18:20.585356549 +0000 UTC m=+1380.251131659" observedRunningTime="2026-04-04 02:18:23.341611835 +0000 UTC m=+1383.007386955" watchObservedRunningTime="2026-04-04 02:18:23.342957262 +0000 UTC m=+1383.008732382"
Apr 04 02:18:25 crc kubenswrapper[4681]: I0404 02:18:25.171651 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsgjs"
Apr 04 02:18:25 crc kubenswrapper[4681]: I0404 02:18:25.171731 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsgjs"
Apr 04 02:18:25 crc kubenswrapper[4681]: I0404 02:18:25.247611 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsgjs"
Apr 04 02:18:26 crc kubenswrapper[4681]: I0404 02:18:26.068624 4681 generic.go:334] "Generic (PLEG): container finished" podID="8e6e9171-5cc4-45fb-9668-dee0d4a7df22" containerID="a08da319b1adb8c73595ab79416fd2ad1b62d1a5ad788992d80fc4351c63101a" exitCode=0
Apr 04 02:18:26 crc kubenswrapper[4681]: I0404 02:18:26.068716 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" event={"ID":"8e6e9171-5cc4-45fb-9668-dee0d4a7df22","Type":"ContainerDied","Data":"a08da319b1adb8c73595ab79416fd2ad1b62d1a5ad788992d80fc4351c63101a"}
Apr 04 02:18:27 crc kubenswrapper[4681]: I0404 02:18:27.124595 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsgjs"
Apr 04 02:18:27 crc kubenswrapper[4681]: I0404 02:18:27.175254 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"]
Apr 04 02:18:27 crc kubenswrapper[4681]: I0404 02:18:27.786158 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7ffb6b7cdc-gcbfv"
Apr 04 02:18:28 crc kubenswrapper[4681]: I0404 02:18:28.219695 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-b7b49d78f-skbql"
Apr 04 02:18:29 crc kubenswrapper[4681]: I0404 02:18:29.097598 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsgjs" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="registry-server" containerID="cri-o://787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" gracePeriod=2
Apr 04 02:18:29 crc kubenswrapper[4681]: I0404 02:18:29.282152 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zrps5"
Apr 04 02:18:29 crc kubenswrapper[4681]: I0404 02:18:29.282216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zrps5"
Apr 04 02:18:29 crc kubenswrapper[4681]: I0404 02:18:29.807707 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n9lp2"
Apr 04 02:18:29 crc kubenswrapper[4681]: I0404 02:18:29.808553 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n9lp2"
Apr 04 02:18:30 crc kubenswrapper[4681]: I0404 02:18:30.331520 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zrps5" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" probeResult="failure" output=<
Apr 04 02:18:30 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Apr 04 02:18:30 crc kubenswrapper[4681]: >
Apr 04 02:18:30 crc kubenswrapper[4681]: I0404 02:18:30.848393 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n9lp2" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" probeResult="failure" output=<
Apr 04 02:18:30 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Apr 04 02:18:30 crc kubenswrapper[4681]: >
Apr 04 02:18:31 crc kubenswrapper[4681]: I0404 02:18:31.727986 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-58689c6fff-rnnzd"
Apr 04 02:18:31 crc kubenswrapper[4681]: I0404 02:18:31.750051 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68"
Apr 04 02:18:31 crc kubenswrapper[4681]: I0404 02:18:31.756209 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-648bdc7f99-skr68"
Apr 04 02:18:31 crc kubenswrapper[4681]: I0404 02:18:31.984251 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f96574b5-82k6f"
Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.058738 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-dbf8bb784-4gx6m"
Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.139443 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6b7497dc59-tllnk"
Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.156165 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6554749d88-tj6wj"
Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.211407 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d6f9fd68c-x7x9p"
Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.326376 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-fbdcf7f7b-844tj"
Apr 04
02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.396024 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6f76d4c7-2vrfg" Apr 04 02:18:32 crc kubenswrapper[4681]: I0404 02:18:32.412601 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-84464c7c78-brc8n" Apr 04 02:18:35 crc kubenswrapper[4681]: I0404 02:18:35.140798 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsgjs_4e664bb2-6221-4c7f-8159-4bd460b7d7fd/registry-server/0.log" Apr 04 02:18:35 crc kubenswrapper[4681]: I0404 02:18:35.142065 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerID="787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" exitCode=137 Apr 04 02:18:35 crc kubenswrapper[4681]: I0404 02:18:35.142100 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerDied","Data":"787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a"} Apr 04 02:18:35 crc kubenswrapper[4681]: E0404 02:18:35.172289 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a is running failed: container process not found" containerID="787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:18:35 crc kubenswrapper[4681]: E0404 02:18:35.172897 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a is running failed: container 
process not found" containerID="787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:18:35 crc kubenswrapper[4681]: E0404 02:18:35.173309 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a is running failed: container process not found" containerID="787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:18:35 crc kubenswrapper[4681]: E0404 02:18:35.173344 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-fsgjs" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="registry-server" Apr 04 02:18:39 crc kubenswrapper[4681]: I0404 02:18:39.967023 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.092341 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vlb4\" (UniqueName: \"kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4\") pod \"8e6e9171-5cc4-45fb-9668-dee0d4a7df22\" (UID: \"8e6e9171-5cc4-45fb-9668-dee0d4a7df22\") " Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.097133 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4" (OuterVolumeSpecName: "kube-api-access-5vlb4") pod "8e6e9171-5cc4-45fb-9668-dee0d4a7df22" (UID: "8e6e9171-5cc4-45fb-9668-dee0d4a7df22"). 
InnerVolumeSpecName "kube-api-access-5vlb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.184187 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" event={"ID":"8e6e9171-5cc4-45fb-9668-dee0d4a7df22","Type":"ContainerDied","Data":"32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2"} Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.184247 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32ae30ead980ca5ff2231733742b82d2a0185c38aee34388796841d8adde6aa2" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.184403 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587818-zwhsm" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.195086 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vlb4\" (UniqueName: \"kubernetes.io/projected/8e6e9171-5cc4-45fb-9668-dee0d4a7df22-kube-api-access-5vlb4\") on node \"crc\" DevicePath \"\"" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.335119 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zrps5" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" probeResult="failure" output=< Apr 04 02:18:40 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:18:40 crc kubenswrapper[4681]: > Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.852006 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n9lp2" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" probeResult="failure" output=< Apr 04 02:18:40 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:18:40 crc kubenswrapper[4681]: > Apr 04 02:18:40 crc 
kubenswrapper[4681]: I0404 02:18:40.943064 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsgjs_4e664bb2-6221-4c7f-8159-4bd460b7d7fd/registry-server/0.log" Apr 04 02:18:40 crc kubenswrapper[4681]: I0404 02:18:40.944636 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.008104 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities\") pod \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.008995 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities" (OuterVolumeSpecName: "utilities") pod "4e664bb2-6221-4c7f-8159-4bd460b7d7fd" (UID: "4e664bb2-6221-4c7f-8159-4bd460b7d7fd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.009129 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content\") pod \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.009322 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6pcz\" (UniqueName: \"kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz\") pod \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\" (UID: \"4e664bb2-6221-4c7f-8159-4bd460b7d7fd\") " Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.010434 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.192738 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fsgjs_4e664bb2-6221-4c7f-8159-4bd460b7d7fd/registry-server/0.log" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.193275 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsgjs" event={"ID":"4e664bb2-6221-4c7f-8159-4bd460b7d7fd","Type":"ContainerDied","Data":"55112aa033c3820c7b12352502fa075793afc2d814c6252b20cdc56a31a327ff"} Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.193319 4681 scope.go:117] "RemoveContainer" containerID="787d9389a6a6daf6e87071e21819b34f114e55a4694b09914d19f72121a0763a" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.193429 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsgjs" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.214659 4681 scope.go:117] "RemoveContainer" containerID="71335e500d45453de345173dbecbae45f442514949e2052d76521b716d682061" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.712149 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz" (OuterVolumeSpecName: "kube-api-access-n6pcz") pod "4e664bb2-6221-4c7f-8159-4bd460b7d7fd" (UID: "4e664bb2-6221-4c7f-8159-4bd460b7d7fd"). InnerVolumeSpecName "kube-api-access-n6pcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.725855 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6pcz\" (UniqueName: \"kubernetes.io/projected/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-kube-api-access-n6pcz\") on node \"crc\" DevicePath \"\"" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.742835 4681 scope.go:117] "RemoveContainer" containerID="2cc6a532607c697e9bb62fe1560c22bb0a811b6c57b3950a2612dc4ee4ff1115" Apr 04 02:18:41 crc kubenswrapper[4681]: E0404 02:18:41.827637 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Apr 04 02:18:41 crc kubenswrapper[4681]: E0404 02:18:41.827796 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l2jg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zv8zs_openstack-operators(2acd20f5-b31c-411a-989c-f0ad12628894): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:18:41 crc kubenswrapper[4681]: E0404 02:18:41.829209 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.831253 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587812-7nb9k"] Apr 04 02:18:41 crc kubenswrapper[4681]: I0404 02:18:41.839338 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587812-7nb9k"] Apr 04 02:18:43 crc kubenswrapper[4681]: I0404 02:18:43.244092 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa244c5-86ba-46d3-95de-975a1789cf9d" path="/var/lib/kubelet/pods/4fa244c5-86ba-46d3-95de-975a1789cf9d/volumes" Apr 04 02:18:44 crc kubenswrapper[4681]: I0404 02:18:44.247250 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e664bb2-6221-4c7f-8159-4bd460b7d7fd" (UID: "4e664bb2-6221-4c7f-8159-4bd460b7d7fd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:18:44 crc kubenswrapper[4681]: I0404 02:18:44.259505 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e664bb2-6221-4c7f-8159-4bd460b7d7fd-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:18:44 crc kubenswrapper[4681]: I0404 02:18:44.521424 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"] Apr 04 02:18:44 crc kubenswrapper[4681]: I0404 02:18:44.527892 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsgjs"] Apr 04 02:18:45 crc kubenswrapper[4681]: I0404 02:18:45.210824 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" path="/var/lib/kubelet/pods/4e664bb2-6221-4c7f-8159-4bd460b7d7fd/volumes" Apr 04 02:18:50 crc kubenswrapper[4681]: I0404 02:18:50.328891 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zrps5" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" probeResult="failure" output=< Apr 04 02:18:50 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:18:50 crc kubenswrapper[4681]: > Apr 04 02:18:50 crc kubenswrapper[4681]: I0404 02:18:50.844878 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n9lp2" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" probeResult="failure" output=< Apr 04 02:18:50 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:18:50 crc kubenswrapper[4681]: > Apr 04 02:18:56 crc kubenswrapper[4681]: E0404 02:18:56.203757 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:18:59 crc kubenswrapper[4681]: I0404 02:18:59.331018 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:18:59 crc kubenswrapper[4681]: I0404 02:18:59.392457 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:18:59 crc kubenswrapper[4681]: I0404 02:18:59.564458 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:19:00 crc kubenswrapper[4681]: I0404 02:19:00.383072 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zrps5" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" containerID="cri-o://90d90aa2ff4aa0b96b81f243f07ee4c665ac9a98457f572593d244f49257849b" gracePeriod=2 Apr 04 02:19:00 crc kubenswrapper[4681]: I0404 02:19:00.855117 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-n9lp2" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" probeResult="failure" output=< Apr 04 02:19:00 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:19:00 crc kubenswrapper[4681]: > Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.240675 4681 scope.go:117] "RemoveContainer" containerID="ac90fea8b09e8ac7eafdf636d72d3114175dd78ea36a2f7b4c5adffe8be30c3f" Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.416965 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-zrps5_2b649a9d-e38f-4496-b6b7-39145dfdb158/registry-server/0.log" Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.418024 4681 generic.go:334] "Generic (PLEG): container finished" podID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerID="90d90aa2ff4aa0b96b81f243f07ee4c665ac9a98457f572593d244f49257849b" exitCode=137 Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.418088 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerDied","Data":"90d90aa2ff4aa0b96b81f243f07ee4c665ac9a98457f572593d244f49257849b"} Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.926433 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zrps5_2b649a9d-e38f-4496-b6b7-39145dfdb158/registry-server/0.log" Apr 04 02:19:03 crc kubenswrapper[4681]: I0404 02:19:03.927799 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.037673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities\") pod \"2b649a9d-e38f-4496-b6b7-39145dfdb158\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.037784 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xwt9\" (UniqueName: \"kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9\") pod \"2b649a9d-e38f-4496-b6b7-39145dfdb158\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.037823 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content\") pod \"2b649a9d-e38f-4496-b6b7-39145dfdb158\" (UID: \"2b649a9d-e38f-4496-b6b7-39145dfdb158\") " Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.039717 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities" (OuterVolumeSpecName: "utilities") pod "2b649a9d-e38f-4496-b6b7-39145dfdb158" (UID: "2b649a9d-e38f-4496-b6b7-39145dfdb158"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.063514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9" (OuterVolumeSpecName: "kube-api-access-7xwt9") pod "2b649a9d-e38f-4496-b6b7-39145dfdb158" (UID: "2b649a9d-e38f-4496-b6b7-39145dfdb158"). InnerVolumeSpecName "kube-api-access-7xwt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.087493 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b649a9d-e38f-4496-b6b7-39145dfdb158" (UID: "2b649a9d-e38f-4496-b6b7-39145dfdb158"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.139344 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.139380 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xwt9\" (UniqueName: \"kubernetes.io/projected/2b649a9d-e38f-4496-b6b7-39145dfdb158-kube-api-access-7xwt9\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.139390 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b649a9d-e38f-4496-b6b7-39145dfdb158-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.426201 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-zrps5_2b649a9d-e38f-4496-b6b7-39145dfdb158/registry-server/0.log" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.427081 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zrps5" event={"ID":"2b649a9d-e38f-4496-b6b7-39145dfdb158","Type":"ContainerDied","Data":"628b1ccaf472dfad8d7abad5220c34e904e8b67b8ab0084c52159ee24b37f4e5"} Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.427132 4681 scope.go:117] "RemoveContainer" 
containerID="90d90aa2ff4aa0b96b81f243f07ee4c665ac9a98457f572593d244f49257849b" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.427139 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zrps5" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.448000 4681 scope.go:117] "RemoveContainer" containerID="871697c07291c094db880103fc8a57301ab3394d88b2689d0e243e225e6b4de6" Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.457213 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.467933 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zrps5"] Apr 04 02:19:04 crc kubenswrapper[4681]: I0404 02:19:04.480172 4681 scope.go:117] "RemoveContainer" containerID="3a87b51ecd43809046a28cb990e4332fa09aee0f669e056bada0ede67553987f" Apr 04 02:19:07 crc kubenswrapper[4681]: I0404 02:19:05.213683 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" path="/var/lib/kubelet/pods/2b649a9d-e38f-4496-b6b7-39145dfdb158/volumes" Apr 04 02:19:08 crc kubenswrapper[4681]: E0404 02:19:08.203105 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:19:09 crc kubenswrapper[4681]: I0404 02:19:09.846766 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:19:09 crc kubenswrapper[4681]: I0404 02:19:09.887777 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:19:10 crc kubenswrapper[4681]: I0404 02:19:10.080956 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:19:11 crc kubenswrapper[4681]: I0404 02:19:11.482642 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n9lp2" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" containerID="cri-o://bae30a9c8f9008340e80437202b1a37deeb64b1ca587aca5c7bfac9ec418144a" gracePeriod=2 Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.492492 4681 generic.go:334] "Generic (PLEG): container finished" podID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerID="bae30a9c8f9008340e80437202b1a37deeb64b1ca587aca5c7bfac9ec418144a" exitCode=0 Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.492553 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerDied","Data":"bae30a9c8f9008340e80437202b1a37deeb64b1ca587aca5c7bfac9ec418144a"} Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.492964 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n9lp2" event={"ID":"a90c835b-2256-4a2c-93bb-9e0d92db2f25","Type":"ContainerDied","Data":"c24677847449dd7b6a56a7c6f4b58ebf314595802e2997bb50e940eb7f70e4e1"} Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.492977 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c24677847449dd7b6a56a7c6f4b58ebf314595802e2997bb50e940eb7f70e4e1" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.506050 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.660900 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgbt6\" (UniqueName: \"kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6\") pod \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.660960 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities\") pod \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.661000 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content\") pod \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\" (UID: \"a90c835b-2256-4a2c-93bb-9e0d92db2f25\") " Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.661784 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities" (OuterVolumeSpecName: "utilities") pod "a90c835b-2256-4a2c-93bb-9e0d92db2f25" (UID: "a90c835b-2256-4a2c-93bb-9e0d92db2f25"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.665971 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6" (OuterVolumeSpecName: "kube-api-access-qgbt6") pod "a90c835b-2256-4a2c-93bb-9e0d92db2f25" (UID: "a90c835b-2256-4a2c-93bb-9e0d92db2f25"). InnerVolumeSpecName "kube-api-access-qgbt6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.684212 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a90c835b-2256-4a2c-93bb-9e0d92db2f25" (UID: "a90c835b-2256-4a2c-93bb-9e0d92db2f25"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.762976 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgbt6\" (UniqueName: \"kubernetes.io/projected/a90c835b-2256-4a2c-93bb-9e0d92db2f25-kube-api-access-qgbt6\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.763014 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:12 crc kubenswrapper[4681]: I0404 02:19:12.763027 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a90c835b-2256-4a2c-93bb-9e0d92db2f25-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:19:13 crc kubenswrapper[4681]: I0404 02:19:13.503218 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n9lp2" Apr 04 02:19:13 crc kubenswrapper[4681]: I0404 02:19:13.533824 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:19:13 crc kubenswrapper[4681]: I0404 02:19:13.543829 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n9lp2"] Apr 04 02:19:15 crc kubenswrapper[4681]: I0404 02:19:15.212524 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" path="/var/lib/kubelet/pods/a90c835b-2256-4a2c-93bb-9e0d92db2f25/volumes" Apr 04 02:19:20 crc kubenswrapper[4681]: E0404 02:19:20.203988 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podUID="2acd20f5-b31c-411a-989c-f0ad12628894" Apr 04 02:19:36 crc kubenswrapper[4681]: I0404 02:19:36.689569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" event={"ID":"2acd20f5-b31c-411a-989c-f0ad12628894","Type":"ContainerStarted","Data":"c4413ad2ccff8f85326e4de76a4f2c85d8fdabb75db13c5ec4179cab9396a1bd"} Apr 04 02:19:36 crc kubenswrapper[4681]: I0404 02:19:36.718895 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zv8zs" podStartSLOduration=1.986022709 podStartE2EDuration="2m24.718867746s" podCreationTimestamp="2026-04-04 02:17:12 +0000 UTC" firstStartedPulling="2026-04-04 02:17:13.460521588 +0000 UTC m=+1313.126296708" lastFinishedPulling="2026-04-04 02:19:36.193366615 +0000 UTC m=+1455.859141745" 
observedRunningTime="2026-04-04 02:19:36.708353957 +0000 UTC m=+1456.374129107" watchObservedRunningTime="2026-04-04 02:19:36.718867746 +0000 UTC m=+1456.384642886" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.354708 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355607 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355624 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355671 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355681 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355705 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355715 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355729 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355737 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355753 4681 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355761 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355778 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355788 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355806 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355815 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="extract-content" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355835 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e6e9171-5cc4-45fb-9668-dee0d4a7df22" containerName="oc" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355845 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e6e9171-5cc4-45fb-9668-dee0d4a7df22" containerName="oc" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355863 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355870 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: E0404 02:19:52.355882 4681 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.355890 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="extract-utilities" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.356078 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b649a9d-e38f-4496-b6b7-39145dfdb158" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.356098 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a90c835b-2256-4a2c-93bb-9e0d92db2f25" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.356111 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e6e9171-5cc4-45fb-9668-dee0d4a7df22" containerName="oc" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.356130 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e664bb2-6221-4c7f-8159-4bd460b7d7fd" containerName="registry-server" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.357054 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.359368 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.359596 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.360986 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.361525 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-75s7w" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.363836 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.415098 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.416499 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.419945 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.426946 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.536831 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.536895 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.536930 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.536950 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz98b\" (UniqueName: \"kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 
02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.537175 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5dgt\" (UniqueName: \"kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.638629 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.638699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.638729 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.638749 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bz98b\" (UniqueName: \"kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 
02:19:52.638802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5dgt\" (UniqueName: \"kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.639600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.639611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.639763 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.656627 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5dgt\" (UniqueName: \"kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt\") pod \"dnsmasq-dns-56d9f89cb5-6kwjz\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.659977 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bz98b\" (UniqueName: \"kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b\") pod \"dnsmasq-dns-69554c7465-wwqrx\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.675329 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:19:52 crc kubenswrapper[4681]: I0404 02:19:52.734110 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:19:53 crc kubenswrapper[4681]: I0404 02:19:53.099708 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:19:53 crc kubenswrapper[4681]: I0404 02:19:53.194070 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:19:53 crc kubenswrapper[4681]: W0404 02:19:53.198202 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd10f82c9_9855_44b8_ae26_7426ae4fad47.slice/crio-9cdc216deaaf7e470fb67336b93b2b19a12fa739438caed86f2bc550b74a9f6d WatchSource:0}: Error finding container 9cdc216deaaf7e470fb67336b93b2b19a12fa739438caed86f2bc550b74a9f6d: Status 404 returned error can't find the container with id 9cdc216deaaf7e470fb67336b93b2b19a12fa739438caed86f2bc550b74a9f6d Apr 04 02:19:53 crc kubenswrapper[4681]: I0404 02:19:53.835365 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" event={"ID":"d10f82c9-9855-44b8-ae26-7426ae4fad47","Type":"ContainerStarted","Data":"9cdc216deaaf7e470fb67336b93b2b19a12fa739438caed86f2bc550b74a9f6d"} Apr 04 02:19:53 crc kubenswrapper[4681]: I0404 02:19:53.836307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" 
event={"ID":"ed55d407-8d90-441b-816f-8519eda89a94","Type":"ContainerStarted","Data":"c53a54ff5c4da47544309b2ae6cdec3b528779661a5f6fd7a50da53b591506b0"} Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.101143 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.139794 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.145612 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.153445 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.295917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.295959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85whf\" (UniqueName: \"kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.296054 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " 
pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.369181 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.396239 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.401708 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.401771 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85whf\" (UniqueName: \"kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.402014 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.402835 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.403129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.403884 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.419605 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.440676 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85whf\" (UniqueName: \"kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf\") pod \"dnsmasq-dns-7fdf6c6f7-q2q8w\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.476374 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.505037 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.505104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9pcg\" (UniqueName: \"kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.505180 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.607408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.607464 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9pcg\" (UniqueName: \"kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: 
\"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.607527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.609292 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.609323 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.648470 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9pcg\" (UniqueName: \"kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg\") pod \"dnsmasq-dns-6b67658d95-hdcw2\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.661546 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.662139 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.683838 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.685646 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.699245 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.710074 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.713702 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qdw\" (UniqueName: \"kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.713747 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.815311 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qdw\" (UniqueName: 
\"kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.815354 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.815398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.816159 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.819172 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:56 crc kubenswrapper[4681]: I0404 02:19:56.833941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qdw\" (UniqueName: \"kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw\") pod \"dnsmasq-dns-595d94d48f-jpw4b\" 
(UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.013560 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.260358 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.261856 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.265728 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.265560 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d5flm" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.266684 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.266729 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.267418 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.268720 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.272948 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.280161 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338150 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338190 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338430 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338525 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338573 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338618 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338641 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338688 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338871 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.338915 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgqd\" (UniqueName: 
\"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.440838 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.440912 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.440934 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.440967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.440991 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfgqd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441059 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441074 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.441210 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.442426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.442638 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.442771 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.444695 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.445734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf\") 
pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.446168 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.451134 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.459710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.461503 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfgqd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.461859 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.463043 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.473491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.537413 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.542902 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547312 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547515 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547565 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547594 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547795 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.547839 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Apr 04 
02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.548116 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qc276" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.548781 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.588014 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645678 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645755 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645788 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645815 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645846 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645928 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcs6\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645961 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.645990 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.646022 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747058 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747117 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747238 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747314 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747349 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: 
I0404 02:19:57.747368 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747389 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcs6\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747410 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747608 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.747636 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.748470 4681 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.748986 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.749166 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.749491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.751295 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.755946 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.756207 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.756232 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.765980 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srcs6\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.795720 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.836924 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.838796 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.843080 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.843586 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-default-user" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.846701 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-erlang-cookie" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.846732 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-plugins-conf" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.846886 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-server-conf" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.847004 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"notifications-rabbitmq-config-data" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.847123 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-notifications-rabbitmq-svc" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.847289 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"notifications-rabbitmq-server-dockercfg-6pcx7" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.874210 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949252 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/189dfe5e-4211-48c8-bc76-ea9c229c5d65-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949438 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949471 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27kfk\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-kube-api-access-27kfk\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949495 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-confd\") pod 
\"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949600 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949683 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/189dfe5e-4211-48c8-bc76-ea9c229c5d65-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:57 crc kubenswrapper[4681]: I0404 02:19:57.949707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051143 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/189dfe5e-4211-48c8-bc76-ea9c229c5d65-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051213 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/189dfe5e-4211-48c8-bc76-ea9c229c5d65-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051289 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27kfk\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-kube-api-access-27kfk\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051313 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051344 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051361 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.051617 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.052127 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-plugins\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.052180 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-erlang-cookie\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.052597 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-plugins-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.053460 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-config-data\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.053466 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/189dfe5e-4211-48c8-bc76-ea9c229c5d65-server-conf\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.055458 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/189dfe5e-4211-48c8-bc76-ea9c229c5d65-erlang-cookie-secret\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.056518 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-confd\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.057720 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/189dfe5e-4211-48c8-bc76-ea9c229c5d65-pod-info\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.060872 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-rabbitmq-tls\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.067697 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27kfk\" (UniqueName: \"kubernetes.io/projected/189dfe5e-4211-48c8-bc76-ea9c229c5d65-kube-api-access-27kfk\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.074377 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"notifications-rabbitmq-server-0\" (UID: \"189dfe5e-4211-48c8-bc76-ea9c229c5d65\") " pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.171556 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.985987 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.989733 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.993892 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.994526 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.994740 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Apr 04 02:19:58 crc kubenswrapper[4681]: I0404 02:19:58.995285 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-w4qqs" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.003427 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.004775 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.171717 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df2cz\" (UniqueName: \"kubernetes.io/projected/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kube-api-access-df2cz\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.171993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172153 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172250 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172417 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172533 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172638 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.172730 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.273945 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df2cz\" (UniqueName: \"kubernetes.io/projected/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kube-api-access-df2cz\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274037 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274086 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274107 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274159 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274183 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274203 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.274222 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.276210 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 
02:19:59.276622 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.276734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.277298 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kolla-config\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.277787 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-config-data-default\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.296612 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.298431 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.303607 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df2cz\" (UniqueName: \"kubernetes.io/projected/dd82b7b7-ba75-4588-9dc2-c47ed34762b5-kube-api-access-df2cz\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.323069 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"dd82b7b7-ba75-4588-9dc2-c47ed34762b5\") " pod="openstack/openstack-galera-0" Apr 04 02:19:59 crc kubenswrapper[4681]: I0404 02:19:59.613138 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.140114 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587820-s4hzt"] Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.141448 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.143305 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.143537 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.145020 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.150119 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587820-s4hzt"] Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.288828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8dbz\" (UniqueName: \"kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz\") pod \"auto-csr-approver-29587820-s4hzt\" (UID: \"72057968-aa72-4a22-aaea-f74196e09c9e\") " pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.378555 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.379890 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.385768 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-hxdjt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.386437 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.386693 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.389697 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.392508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8dbz\" (UniqueName: \"kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz\") pod \"auto-csr-approver-29587820-s4hzt\" (UID: \"72057968-aa72-4a22-aaea-f74196e09c9e\") " pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.398122 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.417383 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8dbz\" (UniqueName: \"kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz\") pod \"auto-csr-approver-29587820-s4hzt\" (UID: \"72057968-aa72-4a22-aaea-f74196e09c9e\") " pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.459727 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494070 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494359 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494498 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494603 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494726 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: 
\"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494834 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.494956 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.495082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64gm\" (UniqueName: \"kubernetes.io/projected/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kube-api-access-b64gm\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596466 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596557 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: 
\"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596634 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596662 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596723 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596822 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 
04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.596882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64gm\" (UniqueName: \"kubernetes.io/projected/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kube-api-access-b64gm\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.597503 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.597886 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.598581 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.599674 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.601575 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.602524 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.603393 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.629149 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.629167 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64gm\" (UniqueName: \"kubernetes.io/projected/7de30d66-63ae-43ca-8d87-33b3fc14f4b2-kube-api-access-b64gm\") pod \"openstack-cell1-galera-0\" (UID: \"7de30d66-63ae-43ca-8d87-33b3fc14f4b2\") " pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.704984 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.834084 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.836179 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.838221 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.839979 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.842176 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-2rnvf" Apr 04 02:20:00 crc kubenswrapper[4681]: I0404 02:20:00.849845 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.004473 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.004547 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-config-data\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.004631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kolla-config\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.004682 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8qxw\" (UniqueName: \"kubernetes.io/projected/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kube-api-access-k8qxw\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.004722 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.107654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8qxw\" (UniqueName: \"kubernetes.io/projected/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kube-api-access-k8qxw\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.108132 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.108628 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-memcached-tls-certs\") pod \"memcached-0\" (UID: 
\"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.108922 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-config-data\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.109106 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kolla-config\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.109934 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kolla-config\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.109962 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-config-data\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.112991 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.113081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.123536 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8qxw\" (UniqueName: \"kubernetes.io/projected/160ce09d-ccb7-4ce9-8bbe-574e115fcc3f-kube-api-access-k8qxw\") pod \"memcached-0\" (UID: \"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f\") " pod="openstack/memcached-0" Apr 04 02:20:01 crc kubenswrapper[4681]: I0404 02:20:01.162704 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.297790 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.299450 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.302431 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-9qp6x" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.306751 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.459027 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8jfb\" (UniqueName: \"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb\") pod \"kube-state-metrics-0\" (UID: \"e1febd11-574c-4fc6-967c-d74bef4e351a\") " pod="openstack/kube-state-metrics-0" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.563230 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8jfb\" (UniqueName: 
\"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb\") pod \"kube-state-metrics-0\" (UID: \"e1febd11-574c-4fc6-967c-d74bef4e351a\") " pod="openstack/kube-state-metrics-0" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.598928 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8jfb\" (UniqueName: \"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb\") pod \"kube-state-metrics-0\" (UID: \"e1febd11-574c-4fc6-967c-d74bef4e351a\") " pod="openstack/kube-state-metrics-0" Apr 04 02:20:03 crc kubenswrapper[4681]: I0404 02:20:03.635423 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.653497 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.656616 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659184 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659703 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659814 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xgv8d" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659964 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659992 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.659705 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.660206 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.669917 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.686645 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.781459 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: 
\"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.781784 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782100 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782424 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782495 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782572 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2prl7\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782645 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782688 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.782890 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " 
pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884230 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884403 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884435 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2prl7\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2\") pod 
\"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884500 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884556 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884577 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884598 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.884625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.886235 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.886235 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.886793 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.888564 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.888943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.888944 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.889251 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.890185 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.890229 4681 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.890252 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1afbb87d4ef2fe230ee8a94c40d1d069f8d7a05e7e9d3bfdb3b9deafd206a254/globalmount\"" pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.917196 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2prl7\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.937831 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:04 crc kubenswrapper[4681]: I0404 02:20:04.983489 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.163617 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jz78r"] Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.170922 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.172407 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jz78r"] Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.173569 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-w22l6" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.173812 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.173995 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.191108 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-ndgrb"] Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.193439 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.204197 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ndgrb"] Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307542 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-ovn-controller-tls-certs\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-etc-ovs\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307657 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38cc8476-2432-47d7-ad56-fd155b7680a5-scripts\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307692 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spvnc\" (UniqueName: \"kubernetes.io/projected/38cc8476-2432-47d7-ad56-fd155b7680a5-kube-api-access-spvnc\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307844 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" 
(UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-run\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307893 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-scripts\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.307943 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28nfg\" (UniqueName: \"kubernetes.io/projected/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-kube-api-access-28nfg\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-combined-ca-bundle\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308190 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308277 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-lib\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308345 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-log\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.308424 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-log-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.409700 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-scripts\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.409750 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28nfg\" (UniqueName: \"kubernetes.io/projected/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-kube-api-access-28nfg\") pod 
\"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.409818 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-combined-ca-bundle\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.410163 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.410664 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.410731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-lib\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.410864 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-lib\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 
02:20:06.410920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411027 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-run\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.410945 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-log\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411085 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-log-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411136 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-ovn-controller-tls-certs\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411188 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-etc-ovs\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411209 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38cc8476-2432-47d7-ad56-fd155b7680a5-scripts\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411328 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-log\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411362 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-var-log-ovn\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411402 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-etc-ovs\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411462 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spvnc\" (UniqueName: \"kubernetes.io/projected/38cc8476-2432-47d7-ad56-fd155b7680a5-kube-api-access-spvnc\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " 
pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411505 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-run\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.411594 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/38cc8476-2432-47d7-ad56-fd155b7680a5-var-run\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.412652 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-scripts\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.414440 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-ovn-controller-tls-certs\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.414671 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-combined-ca-bundle\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.418717 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/38cc8476-2432-47d7-ad56-fd155b7680a5-scripts\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.432624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28nfg\" (UniqueName: \"kubernetes.io/projected/616e7c64-534b-41e8-8ad9-0abf8f05d3d5-kube-api-access-28nfg\") pod \"ovn-controller-jz78r\" (UID: \"616e7c64-534b-41e8-8ad9-0abf8f05d3d5\") " pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.446517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spvnc\" (UniqueName: \"kubernetes.io/projected/38cc8476-2432-47d7-ad56-fd155b7680a5-kube-api-access-spvnc\") pod \"ovn-controller-ovs-ndgrb\" (UID: \"38cc8476-2432-47d7-ad56-fd155b7680a5\") " pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.489848 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jz78r" Apr 04 02:20:06 crc kubenswrapper[4681]: I0404 02:20:06.511506 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.743362 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.744894 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.746733 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.747304 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.748002 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.748441 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-l6q22" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.748890 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.750522 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqq2k\" (UniqueName: \"kubernetes.io/projected/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-kube-api-access-bqq2k\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832700 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832729 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832752 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832797 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832843 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832882 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.832911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934200 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934378 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934476 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqq2k\" (UniqueName: \"kubernetes.io/projected/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-kube-api-access-bqq2k\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 
02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934545 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934559 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.934987 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.935628 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.935858 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-config\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.936082 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.941795 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.943163 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.945561 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:07 crc kubenswrapper[4681]: I0404 02:20:07.986175 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqq2k\" (UniqueName: \"kubernetes.io/projected/30fe1cfd-59db-4c85-bf2c-a476faeabd9c-kube-api-access-bqq2k\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " 
pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:08 crc kubenswrapper[4681]: I0404 02:20:08.027492 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"30fe1cfd-59db-4c85-bf2c-a476faeabd9c\") " pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:08 crc kubenswrapper[4681]: I0404 02:20:08.107643 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.371496 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.373104 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.375175 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.375339 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-d2977" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.375542 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.376324 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.387171 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.487432 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5zjx\" (UniqueName: 
\"kubernetes.io/projected/f2a3604e-5c76-460f-aebb-5e2e89688d74-kube-api-access-q5zjx\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.487744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.487899 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.488068 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-config\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.488164 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.488312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdbserver-sb-tls-certs\") pod 
\"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.488401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.488516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590077 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590158 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " 
pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590202 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5zjx\" (UniqueName: \"kubernetes.io/projected/f2a3604e-5c76-460f-aebb-5e2e89688d74-kube-api-access-q5zjx\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590250 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590333 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590390 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-config\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.590412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.591609 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.592540 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.594124 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a3604e-5c76-460f-aebb-5e2e89688d74-config\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.594435 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.595515 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.598399 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 
02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.612372 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2a3604e-5c76-460f-aebb-5e2e89688d74-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.614876 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5zjx\" (UniqueName: \"kubernetes.io/projected/f2a3604e-5c76-460f-aebb-5e2e89688d74-kube-api-access-q5zjx\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.629468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f2a3604e-5c76-460f-aebb-5e2e89688d74\") " pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:10 crc kubenswrapper[4681]: I0404 02:20:10.690891 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:26 crc kubenswrapper[4681]: I0404 02:20:26.524649 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:20:26 crc kubenswrapper[4681]: I0404 02:20:26.525204 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.137041 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.171303 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f2a3604e-5c76-460f-aebb-5e2e89688d74","Type":"ContainerStarted","Data":"fa3d5657641cee7b285a26a95a7f895a78573237550f5a09658edc8fc9ce41c0"} Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.857022 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.874422 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/notifications-rabbitmq-server-0"] Apr 04 02:20:28 crc kubenswrapper[4681]: W0404 02:20:28.875122 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189dfe5e_4211_48c8_bc76_ea9c229c5d65.slice/crio-0fc9b8157fa49f6d3e1cf48996c0d9169da8e648d14530abdaff9b0adec1458a WatchSource:0}: Error finding container 0fc9b8157fa49f6d3e1cf48996c0d9169da8e648d14530abdaff9b0adec1458a: Status 404 
returned error can't find the container with id 0fc9b8157fa49f6d3e1cf48996c0d9169da8e648d14530abdaff9b0adec1458a Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.885882 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.897463 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.919308 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jz78r"] Apr 04 02:20:28 crc kubenswrapper[4681]: W0404 02:20:28.926728 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6579b16f_f45a_4c22_9107_6763d001efb2.slice/crio-8d35abc49d7dafa0f07907d8aae7bbc1ea76b82aa232d392eef31205979adb2c WatchSource:0}: Error finding container 8d35abc49d7dafa0f07907d8aae7bbc1ea76b82aa232d392eef31205979adb2c: Status 404 returned error can't find the container with id 8d35abc49d7dafa0f07907d8aae7bbc1ea76b82aa232d392eef31205979adb2c Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.933580 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:20:28 crc kubenswrapper[4681]: I0404 02:20:28.939353 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.136808 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.168491 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.245414 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:38.102.83.110:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h9hbdh67bh5bbh68ch56bh656h69h594h576h558h5dh598hf5h67dhc8h644h5f8hf4h6dh66h677h548h59fh74h696hd9hb4h5d5h55bhfq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqq2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&Exec
Action{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(30fe1cfd-59db-4c85-bf2c-a476faeabd9c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.248556 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n69h9hbdh67bh5bbh68ch56bh656h69h594h576h558h5dh598hf5h67dhc8h644h5f8hf4h6dh66h677h548h59fh74h696hd9hb4h5d5h55bhfq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bqq2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil
,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(30fe1cfd-59db-4c85-bf2c-a476faeabd9c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.248631 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.110:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hfgqd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b40175fa-a3b0-40c3-bc35-7d927897b82b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.249769 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/rabbitmq-server-0" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.249836 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="30fe1cfd-59db-4c85-bf2c-a476faeabd9c" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.258143 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.110:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,Recursiv
eReadOnly:nil,},VolumeMount{Name:kube-api-access-b64gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(7de30d66-63ae-43ca-8d87-33b3fc14f4b2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.259652 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/openstack-cell1-galera-0" podUID="7de30d66-63ae-43ca-8d87-33b3fc14f4b2" Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268859 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268893 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1febd11-574c-4fc6-967c-d74bef4e351a","Type":"ContainerStarted","Data":"5270ac1c78bc95c1c239197d463aadf43af48b2e1404bc9d97a5edd151413efa"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268924 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/memcached-0" event={"ID":"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f","Type":"ContainerStarted","Data":"7935a516558ee2619b74954b7899cf55202e62a0a650bbc0ae4a62531756c29a"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268937 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"189dfe5e-4211-48c8-bc76-ea9c229c5d65","Type":"ContainerStarted","Data":"0fc9b8157fa49f6d3e1cf48996c0d9169da8e648d14530abdaff9b0adec1458a"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268948 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd82b7b7-ba75-4588-9dc2-c47ed34762b5","Type":"ContainerStarted","Data":"d18f4f0adcc8b6a719a0023ae22888bc50fa0f2f6e4dc24eca32fb0977fc6c3d"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268957 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerStarted","Data":"8d35abc49d7dafa0f07907d8aae7bbc1ea76b82aa232d392eef31205979adb2c"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268967 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" event={"ID":"641eda62-a031-46b3-8cb3-c534541afcf7","Type":"ContainerStarted","Data":"adb995fa89492fae2e79c445a74611cdf86d9301a9017b042aff6971dfb82f1c"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268975 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r" event={"ID":"616e7c64-534b-41e8-8ad9-0abf8f05d3d5","Type":"ContainerStarted","Data":"ec95909fe03c8b9be534b452040d06f59fcd18056056478090021d5c019ed5d7"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268985 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" 
event={"ID":"168bf88b-a0ac-4eb5-a406-7511b4ce698d","Type":"ContainerStarted","Data":"26babd7e66eef35deec043ca3cc59271e65721c2cb13d2ec21af9fe42b039911"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.268994 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerStarted","Data":"1b1f20ffd5f92abdea6191780f349a233c0076c681e57dcd798f8b28762bd9b5"} Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.272544 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587820-s4hzt"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.284634 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.294533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Apr 04 02:20:29 crc kubenswrapper[4681]: I0404 02:20:29.323797 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ndgrb"] Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.811778 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.811843 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.812017 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d 
--hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5dgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-56d9f89cb5-6kwjz_openstack(d10f82c9-9855-44b8-ae26-7426ae4fad47): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:20:29 crc kubenswrapper[4681]: E0404 02:20:29.813285 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" podUID="d10f82c9-9855-44b8-ae26-7426ae4fad47" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.273045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ndgrb" event={"ID":"38cc8476-2432-47d7-ad56-fd155b7680a5","Type":"ContainerStarted","Data":"0ca6b69f92f5eae623d743edf4a98f49efe619bcf4d8efab47be59924b07e162"} Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.274318 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" event={"ID":"72057968-aa72-4a22-aaea-f74196e09c9e","Type":"ContainerStarted","Data":"e34c33f76ae7a1d49f2f29886e66bd02a8a05cc5ab853b2dd2d3a2435ccb2a29"} Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.275759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerStarted","Data":"b90271c55f6dc546352e3b58b33273ce09e93afbbc49ffa55de4309797c707ff"} Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.280142 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7de30d66-63ae-43ca-8d87-33b3fc14f4b2","Type":"ContainerStarted","Data":"389c9c159488d99afb9ff49cee18ae340c6d902427cbfe04611bd34e8062231b"} Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.281056 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.110:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.282871 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30fe1cfd-59db-4c85-bf2c-a476faeabd9c","Type":"ContainerStarted","Data":"08f4613a0fa88db5138bdc5a76c366cab10f0c89678c8bc893ba0d2f6c7fe66e"} Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.284733 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="7de30d66-63ae-43ca-8d87-33b3fc14f4b2" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.287155 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" event={"ID":"64ec402c-6b37-4942-be19-9dcc436c6650","Type":"ContainerStarted","Data":"481572872fcebbc20411444c05f45ae4f77a3c5198ec6f87ca00d5fb023e25f2"} Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.289442 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="30fe1cfd-59db-4c85-bf2c-a476faeabd9c" Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.465087 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.465146 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.465496 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bz98b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-69554c7465-wwqrx_openstack(ed55d407-8d90-441b-816f-8519eda89a94): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:20:30 crc kubenswrapper[4681]: E0404 02:20:30.466655 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" podUID="ed55d407-8d90-441b-816f-8519eda89a94" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.615784 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.781957 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config\") pod \"d10f82c9-9855-44b8-ae26-7426ae4fad47\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.782064 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5dgt\" (UniqueName: \"kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt\") pod \"d10f82c9-9855-44b8-ae26-7426ae4fad47\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.782247 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc\") pod \"d10f82c9-9855-44b8-ae26-7426ae4fad47\" (UID: \"d10f82c9-9855-44b8-ae26-7426ae4fad47\") " Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.782624 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config" (OuterVolumeSpecName: "config") pod "d10f82c9-9855-44b8-ae26-7426ae4fad47" (UID: "d10f82c9-9855-44b8-ae26-7426ae4fad47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.782781 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d10f82c9-9855-44b8-ae26-7426ae4fad47" (UID: "d10f82c9-9855-44b8-ae26-7426ae4fad47"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.783208 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.783532 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d10f82c9-9855-44b8-ae26-7426ae4fad47-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.787495 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt" (OuterVolumeSpecName: "kube-api-access-k5dgt") pod "d10f82c9-9855-44b8-ae26-7426ae4fad47" (UID: "d10f82c9-9855-44b8-ae26-7426ae4fad47"). InnerVolumeSpecName "kube-api-access-k5dgt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:30 crc kubenswrapper[4681]: I0404 02:20:30.885019 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5dgt\" (UniqueName: \"kubernetes.io/projected/d10f82c9-9855-44b8-ae26-7426ae4fad47-kube-api-access-k5dgt\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:31 crc kubenswrapper[4681]: I0404 02:20:31.309173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" event={"ID":"d10f82c9-9855-44b8-ae26-7426ae4fad47","Type":"ContainerDied","Data":"9cdc216deaaf7e470fb67336b93b2b19a12fa739438caed86f2bc550b74a9f6d"} Apr 04 02:20:31 crc kubenswrapper[4681]: I0404 02:20:31.309300 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56d9f89cb5-6kwjz" Apr 04 02:20:31 crc kubenswrapper[4681]: E0404 02:20:31.318238 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="7de30d66-63ae-43ca-8d87-33b3fc14f4b2" Apr 04 02:20:31 crc kubenswrapper[4681]: E0404 02:20:31.318965 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" Apr 04 02:20:31 crc kubenswrapper[4681]: E0404 02:20:31.320160 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-ovn-nb-db-server:watcher_latest\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="30fe1cfd-59db-4c85-bf2c-a476faeabd9c" Apr 04 02:20:31 crc kubenswrapper[4681]: I0404 02:20:31.530243 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:20:31 crc kubenswrapper[4681]: I0404 02:20:31.540547 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56d9f89cb5-6kwjz"] Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.490224 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10f82c9-9855-44b8-ae26-7426ae4fad47" 
path="/var/lib/kubelet/pods/d10f82c9-9855-44b8-ae26-7426ae4fad47/volumes" Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.850448 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.950878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config\") pod \"ed55d407-8d90-441b-816f-8519eda89a94\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.951341 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config" (OuterVolumeSpecName: "config") pod "ed55d407-8d90-441b-816f-8519eda89a94" (UID: "ed55d407-8d90-441b-816f-8519eda89a94"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.951435 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bz98b\" (UniqueName: \"kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b\") pod \"ed55d407-8d90-441b-816f-8519eda89a94\" (UID: \"ed55d407-8d90-441b-816f-8519eda89a94\") " Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.951768 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed55d407-8d90-441b-816f-8519eda89a94-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:34 crc kubenswrapper[4681]: I0404 02:20:34.969978 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b" (OuterVolumeSpecName: "kube-api-access-bz98b") pod "ed55d407-8d90-441b-816f-8519eda89a94" (UID: 
"ed55d407-8d90-441b-816f-8519eda89a94"). InnerVolumeSpecName "kube-api-access-bz98b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:35 crc kubenswrapper[4681]: I0404 02:20:35.053495 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bz98b\" (UniqueName: \"kubernetes.io/projected/ed55d407-8d90-441b-816f-8519eda89a94-kube-api-access-bz98b\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:35 crc kubenswrapper[4681]: I0404 02:20:35.344777 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" event={"ID":"ed55d407-8d90-441b-816f-8519eda89a94","Type":"ContainerDied","Data":"c53a54ff5c4da47544309b2ae6cdec3b528779661a5f6fd7a50da53b591506b0"} Apr 04 02:20:35 crc kubenswrapper[4681]: I0404 02:20:35.344831 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69554c7465-wwqrx" Apr 04 02:20:35 crc kubenswrapper[4681]: I0404 02:20:35.404180 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:20:35 crc kubenswrapper[4681]: I0404 02:20:35.417055 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-69554c7465-wwqrx"] Apr 04 02:20:37 crc kubenswrapper[4681]: I0404 02:20:37.221807 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed55d407-8d90-441b-816f-8519eda89a94" path="/var/lib/kubelet/pods/ed55d407-8d90-441b-816f-8519eda89a94/volumes" Apr 04 02:20:41 crc kubenswrapper[4681]: I0404 02:20:41.410690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" event={"ID":"72057968-aa72-4a22-aaea-f74196e09c9e","Type":"ContainerStarted","Data":"eb72537278ec7f69a33cd0f2ce675823ca7641fafd48cd792c89d075fe907366"} Apr 04 02:20:41 crc kubenswrapper[4681]: I0404 02:20:41.429743 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29587820-s4hzt" podStartSLOduration=34.180250266 podStartE2EDuration="41.429723961s" podCreationTimestamp="2026-04-04 02:20:00 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.242739261 +0000 UTC m=+1508.908514381" lastFinishedPulling="2026-04-04 02:20:36.492212956 +0000 UTC m=+1516.157988076" observedRunningTime="2026-04-04 02:20:41.426969646 +0000 UTC m=+1521.092744786" watchObservedRunningTime="2026-04-04 02:20:41.429723961 +0000 UTC m=+1521.095499081" Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.202897 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.421200 4681 generic.go:334] "Generic (PLEG): container finished" podID="72057968-aa72-4a22-aaea-f74196e09c9e" containerID="eb72537278ec7f69a33cd0f2ce675823ca7641fafd48cd792c89d075fe907366" exitCode=0 Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.421538 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" event={"ID":"72057968-aa72-4a22-aaea-f74196e09c9e","Type":"ContainerDied","Data":"eb72537278ec7f69a33cd0f2ce675823ca7641fafd48cd792c89d075fe907366"} Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.423826 4681 generic.go:334] "Generic (PLEG): container finished" podID="641eda62-a031-46b3-8cb3-c534541afcf7" containerID="47feea440402ba09ee14438fa0cf02c80a220839ec619aada731ba929874b376" exitCode=0 Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.423927 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" event={"ID":"641eda62-a031-46b3-8cb3-c534541afcf7","Type":"ContainerDied","Data":"47feea440402ba09ee14438fa0cf02c80a220839ec619aada731ba929874b376"} Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.427616 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"160ce09d-ccb7-4ce9-8bbe-574e115fcc3f","Type":"ContainerStarted","Data":"a7508d507c672a1cbc49a63563dedb1583e0a4fc59fbb730d1365ee389e9e2e8"} Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.428175 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Apr 04 02:20:42 crc kubenswrapper[4681]: I0404 02:20:42.478843 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=30.612432648 podStartE2EDuration="42.478826002s" podCreationTimestamp="2026-04-04 02:20:00 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.862510421 +0000 UTC m=+1508.528285551" lastFinishedPulling="2026-04-04 02:20:40.728903785 +0000 UTC m=+1520.394678905" observedRunningTime="2026-04-04 02:20:42.469316091 +0000 UTC m=+1522.135091221" watchObservedRunningTime="2026-04-04 02:20:42.478826002 +0000 UTC m=+1522.144601122" Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.439872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd82b7b7-ba75-4588-9dc2-c47ed34762b5","Type":"ContainerStarted","Data":"08930779d9b2f26aa589d9c5f5db22692071c9fb19190b268180bf6cb38432f2"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.445556 4681 generic.go:334] "Generic (PLEG): container finished" podID="64ec402c-6b37-4942-be19-9dcc436c6650" containerID="9a6d40d9dc888707d9fc2b6101145b2acd5049a694a279b004aae47df96f3a35" exitCode=0 Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.445641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" event={"ID":"64ec402c-6b37-4942-be19-9dcc436c6650","Type":"ContainerDied","Data":"9a6d40d9dc888707d9fc2b6101145b2acd5049a694a279b004aae47df96f3a35"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.447410 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"f2a3604e-5c76-460f-aebb-5e2e89688d74","Type":"ContainerStarted","Data":"acc6c3f9520367bb31c58c5e1e5c1a06749bdb723d6113a66cb3d422a6e0effe"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.450749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ndgrb" event={"ID":"38cc8476-2432-47d7-ad56-fd155b7680a5","Type":"ContainerStarted","Data":"12da01366bba16011addff48c396bcaf071f71b7aa5337b0f101304e3b31e217"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.452996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1febd11-574c-4fc6-967c-d74bef4e351a","Type":"ContainerStarted","Data":"4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.453071 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.459565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r" event={"ID":"616e7c64-534b-41e8-8ad9-0abf8f05d3d5","Type":"ContainerStarted","Data":"b2a613e55a3fa0939c9626fe88549d003c9024bdc0c6ec5649566931038cc35c"} Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.459686 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-jz78r" Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.462598 4681 generic.go:334] "Generic (PLEG): container finished" podID="168bf88b-a0ac-4eb5-a406-7511b4ce698d" containerID="98e4b427ccb02feb53c227a64ef9528e7af40fdb4c88d48ec5f8772059950776" exitCode=0 Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.462638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" event={"ID":"168bf88b-a0ac-4eb5-a406-7511b4ce698d","Type":"ContainerDied","Data":"98e4b427ccb02feb53c227a64ef9528e7af40fdb4c88d48ec5f8772059950776"} Apr 04 02:20:43 
crc kubenswrapper[4681]: I0404 02:20:43.518090 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=27.34067969 podStartE2EDuration="40.518063882s" podCreationTimestamp="2026-04-04 02:20:03 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.920352406 +0000 UTC m=+1508.586127526" lastFinishedPulling="2026-04-04 02:20:42.097736598 +0000 UTC m=+1521.763511718" observedRunningTime="2026-04-04 02:20:43.514133925 +0000 UTC m=+1523.179909045" watchObservedRunningTime="2026-04-04 02:20:43.518063882 +0000 UTC m=+1523.183839012" Apr 04 02:20:43 crc kubenswrapper[4681]: I0404 02:20:43.582901 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-jz78r" podStartSLOduration=24.948684733 podStartE2EDuration="37.582536309s" podCreationTimestamp="2026-04-04 02:20:06 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.925692502 +0000 UTC m=+1508.591467622" lastFinishedPulling="2026-04-04 02:20:41.559544078 +0000 UTC m=+1521.225319198" observedRunningTime="2026-04-04 02:20:43.575894668 +0000 UTC m=+1523.241669808" watchObservedRunningTime="2026-04-04 02:20:43.582536309 +0000 UTC m=+1523.248311439" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.253504 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.416666 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8dbz\" (UniqueName: \"kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz\") pod \"72057968-aa72-4a22-aaea-f74196e09c9e\" (UID: \"72057968-aa72-4a22-aaea-f74196e09c9e\") " Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.423853 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz" (OuterVolumeSpecName: "kube-api-access-t8dbz") pod "72057968-aa72-4a22-aaea-f74196e09c9e" (UID: "72057968-aa72-4a22-aaea-f74196e09c9e"). InnerVolumeSpecName "kube-api-access-t8dbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.483286 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" event={"ID":"168bf88b-a0ac-4eb5-a406-7511b4ce698d","Type":"ContainerDied","Data":"26babd7e66eef35deec043ca3cc59271e65721c2cb13d2ec21af9fe42b039911"} Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.483330 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26babd7e66eef35deec043ca3cc59271e65721c2cb13d2ec21af9fe42b039911" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.492590 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerStarted","Data":"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5"} Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.496494 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.496620 4681 generic.go:334] "Generic (PLEG): container finished" podID="38cc8476-2432-47d7-ad56-fd155b7680a5" containerID="12da01366bba16011addff48c396bcaf071f71b7aa5337b0f101304e3b31e217" exitCode=0 Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.496669 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ndgrb" event={"ID":"38cc8476-2432-47d7-ad56-fd155b7680a5","Type":"ContainerDied","Data":"12da01366bba16011addff48c396bcaf071f71b7aa5337b0f101304e3b31e217"} Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.499010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" event={"ID":"72057968-aa72-4a22-aaea-f74196e09c9e","Type":"ContainerDied","Data":"e34c33f76ae7a1d49f2f29886e66bd02a8a05cc5ab853b2dd2d3a2435ccb2a29"} Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.499039 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e34c33f76ae7a1d49f2f29886e66bd02a8a05cc5ab853b2dd2d3a2435ccb2a29" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.499088 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587820-s4hzt" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.506931 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587814-cdrrh"] Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.508928 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"189dfe5e-4211-48c8-bc76-ea9c229c5d65","Type":"ContainerStarted","Data":"cfc0250f9ff590c86c7a78c7162839570a127e0fbd746ca75f15337422fe7bed"} Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.519085 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8dbz\" (UniqueName: \"kubernetes.io/projected/72057968-aa72-4a22-aaea-f74196e09c9e-kube-api-access-t8dbz\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.524402 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587814-cdrrh"] Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.620096 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9pcg\" (UniqueName: \"kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg\") pod \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.620182 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config\") pod \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.620338 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc\") pod 
\"168bf88b-a0ac-4eb5-a406-7511b4ce698d\" (UID: \"168bf88b-a0ac-4eb5-a406-7511b4ce698d\") " Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.631588 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg" (OuterVolumeSpecName: "kube-api-access-t9pcg") pod "168bf88b-a0ac-4eb5-a406-7511b4ce698d" (UID: "168bf88b-a0ac-4eb5-a406-7511b4ce698d"). InnerVolumeSpecName "kube-api-access-t9pcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.665381 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config" (OuterVolumeSpecName: "config") pod "168bf88b-a0ac-4eb5-a406-7511b4ce698d" (UID: "168bf88b-a0ac-4eb5-a406-7511b4ce698d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.677187 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "168bf88b-a0ac-4eb5-a406-7511b4ce698d" (UID: "168bf88b-a0ac-4eb5-a406-7511b4ce698d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.723067 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9pcg\" (UniqueName: \"kubernetes.io/projected/168bf88b-a0ac-4eb5-a406-7511b4ce698d-kube-api-access-t9pcg\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.723103 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:44 crc kubenswrapper[4681]: I0404 02:20:44.723112 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/168bf88b-a0ac-4eb5-a406-7511b4ce698d-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.215157 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f599124d-85f4-4576-a845-ef6ae9456614" path="/var/lib/kubelet/pods/f599124d-85f4-4576-a845-ef6ae9456614/volumes" Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.520196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30fe1cfd-59db-4c85-bf2c-a476faeabd9c","Type":"ContainerStarted","Data":"18a83849a371b14ab282c6d1f4456b7c10099788cf6f44a64c292c5cd7aaebc8"} Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.523395 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" event={"ID":"64ec402c-6b37-4942-be19-9dcc436c6650","Type":"ContainerStarted","Data":"54266d783e9b3f336356f3dc4441776dbcce7bdadab6c89c6117cebbde979adb"} Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.523691 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.525224 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerStarted","Data":"1cc25dad350a8aaa54a6f494b053fa55a5f1f7e1c9bf98c525234211428b87b9"} Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.527238 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" event={"ID":"641eda62-a031-46b3-8cb3-c534541afcf7","Type":"ContainerStarted","Data":"873a28a2eae6661aeaeb4b71b453f9628b4726bfad3002fd813ef4f35a80c404"} Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.527432 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b67658d95-hdcw2" Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.544104 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" podStartSLOduration=42.249823044 podStartE2EDuration="49.544087356s" podCreationTimestamp="2026-04-04 02:19:56 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.202724425 +0000 UTC m=+1508.868499555" lastFinishedPulling="2026-04-04 02:20:36.496988747 +0000 UTC m=+1516.162763867" observedRunningTime="2026-04-04 02:20:45.541169016 +0000 UTC m=+1525.206944136" watchObservedRunningTime="2026-04-04 02:20:45.544087356 +0000 UTC m=+1525.209862476" Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.583527 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.592408 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b67658d95-hdcw2"] Apr 04 02:20:45 crc kubenswrapper[4681]: I0404 02:20:45.618007 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" podStartSLOduration=37.783329099 podStartE2EDuration="49.617982312s" podCreationTimestamp="2026-04-04 02:19:56 +0000 UTC" firstStartedPulling="2026-04-04 
02:20:28.893675715 +0000 UTC m=+1508.559450835" lastFinishedPulling="2026-04-04 02:20:40.728328928 +0000 UTC m=+1520.394104048" observedRunningTime="2026-04-04 02:20:45.608615825 +0000 UTC m=+1525.274390955" watchObservedRunningTime="2026-04-04 02:20:45.617982312 +0000 UTC m=+1525.283757452" Apr 04 02:20:46 crc kubenswrapper[4681]: I0404 02:20:46.476985 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:20:47 crc kubenswrapper[4681]: I0404 02:20:47.216782 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="168bf88b-a0ac-4eb5-a406-7511b4ce698d" path="/var/lib/kubelet/pods/168bf88b-a0ac-4eb5-a406-7511b4ce698d/volumes" Apr 04 02:20:48 crc kubenswrapper[4681]: I0404 02:20:48.560440 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ndgrb" event={"ID":"38cc8476-2432-47d7-ad56-fd155b7680a5","Type":"ContainerStarted","Data":"bee8e37834973cb5e99a6ad51e87851240257daeacd02ff15ac2aa3e0d1b5ce8"} Apr 04 02:20:48 crc kubenswrapper[4681]: I0404 02:20:48.562538 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerStarted","Data":"d0e9586c6a17e8d85e77ce9203d97ce45d37dae710b702194232b433a39aad53"} Apr 04 02:20:48 crc kubenswrapper[4681]: I0404 02:20:48.568017 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7de30d66-63ae-43ca-8d87-33b3fc14f4b2","Type":"ContainerStarted","Data":"36d56db957703fcbe4a0b7947518ae20c7387fc303d7734d7efd544002e6f079"} Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.584980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f2a3604e-5c76-460f-aebb-5e2e89688d74","Type":"ContainerStarted","Data":"b9c3b6b89c72497d2ddf16a5b23f011e7c547b49a9c8a447cb6dd5d1160f7fba"} Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 
02:20:49.588380 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ndgrb" event={"ID":"38cc8476-2432-47d7-ad56-fd155b7680a5","Type":"ContainerStarted","Data":"ddafe6920d719d7bf5176bea540298fc7018dd45bd7b69fd5d7023803607124b"} Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.588660 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.588679 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.591312 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"30fe1cfd-59db-4c85-bf2c-a476faeabd9c","Type":"ContainerStarted","Data":"c8b6f3d1e27eb1290f552ca49de674696d7a87c47e91c49c71d8ed4e864ae32f"} Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.608834 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=20.273482433 podStartE2EDuration="40.60881483s" podCreationTimestamp="2026-04-04 02:20:09 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.152942245 +0000 UTC m=+1507.818717365" lastFinishedPulling="2026-04-04 02:20:48.488274642 +0000 UTC m=+1528.154049762" observedRunningTime="2026-04-04 02:20:49.606693673 +0000 UTC m=+1529.272468793" watchObservedRunningTime="2026-04-04 02:20:49.60881483 +0000 UTC m=+1529.274589950" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.640393 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ndgrb" podStartSLOduration=31.970351365 podStartE2EDuration="43.640375636s" podCreationTimestamp="2026-04-04 02:20:06 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.331520785 +0000 UTC m=+1508.997295905" lastFinishedPulling="2026-04-04 02:20:41.001545056 +0000 UTC m=+1520.667320176" 
observedRunningTime="2026-04-04 02:20:49.629550129 +0000 UTC m=+1529.295325249" watchObservedRunningTime="2026-04-04 02:20:49.640375636 +0000 UTC m=+1529.306150756" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.662813 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.415569633 podStartE2EDuration="43.66279096s" podCreationTimestamp="2026-04-04 02:20:06 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.245113126 +0000 UTC m=+1508.910888246" lastFinishedPulling="2026-04-04 02:20:48.492334453 +0000 UTC m=+1528.158109573" observedRunningTime="2026-04-04 02:20:49.660078306 +0000 UTC m=+1529.325853426" watchObservedRunningTime="2026-04-04 02:20:49.66279096 +0000 UTC m=+1529.328566080" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.692337 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:49 crc kubenswrapper[4681]: I0404 02:20:49.734946 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.109066 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.150767 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.599105 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.599149 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.633398 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Apr 04 02:20:50 crc 
kubenswrapper[4681]: I0404 02:20:50.642498 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.890914 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.891112 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="dnsmasq-dns" containerID="cri-o://54266d783e9b3f336356f3dc4441776dbcce7bdadab6c89c6117cebbde979adb" gracePeriod=10 Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.892540 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.955420 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:50 crc kubenswrapper[4681]: E0404 02:20:50.957722 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72057968-aa72-4a22-aaea-f74196e09c9e" containerName="oc" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.957766 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="72057968-aa72-4a22-aaea-f74196e09c9e" containerName="oc" Apr 04 02:20:50 crc kubenswrapper[4681]: E0404 02:20:50.957786 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168bf88b-a0ac-4eb5-a406-7511b4ce698d" containerName="init" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.957792 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="168bf88b-a0ac-4eb5-a406-7511b4ce698d" containerName="init" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.958768 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="168bf88b-a0ac-4eb5-a406-7511b4ce698d" containerName="init" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 
02:20:50.958818 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="72057968-aa72-4a22-aaea-f74196e09c9e" containerName="oc" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.969588 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.971055 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-nlvvn"] Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.972646 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.978258 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Apr 04 02:20:50 crc kubenswrapper[4681]: I0404 02:20:50.978282 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.026151 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.038129 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nlvvn"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039312 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039413 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039465 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ltp\" (UniqueName: \"kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039492 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039523 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039556 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/209debba-9c1c-4486-82c7-38424335f889-config\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039580 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovn-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039601 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovs-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039653 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f4hf\" (UniqueName: \"kubernetes.io/projected/209debba-9c1c-4486-82c7-38424335f889-kube-api-access-5f4hf\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.039684 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-combined-ca-bundle\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.080635 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.080830 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="dnsmasq-dns" containerID="cri-o://873a28a2eae6661aeaeb4b71b453f9628b4726bfad3002fd813ef4f35a80c404" gracePeriod=10 Apr 04 
02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.084845 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.115235 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.132138 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.133770 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.134422 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141517 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141569 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87ltp\" (UniqueName: \"kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 
02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141596 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/209debba-9c1c-4486-82c7-38424335f889-config\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovn-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141706 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovs-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141765 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5f4hf\" (UniqueName: \"kubernetes.io/projected/209debba-9c1c-4486-82c7-38424335f889-kube-api-access-5f4hf\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.141802 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-combined-ca-bundle\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.142675 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.142925 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.143203 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-6bn6c" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.143388 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.144425 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.148986 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.149823 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.149996 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovs-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.150004 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/209debba-9c1c-4486-82c7-38424335f889-ovn-rundir\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.150576 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.150661 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/209debba-9c1c-4486-82c7-38424335f889-config\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.151282 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-combined-ca-bundle\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.156840 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/209debba-9c1c-4486-82c7-38424335f889-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.161024 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.164401 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.176131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87ltp\" (UniqueName: \"kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp\") pod \"dnsmasq-dns-c9f548c5f-whptx\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.195923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5f4hf\" (UniqueName: \"kubernetes.io/projected/209debba-9c1c-4486-82c7-38424335f889-kube-api-access-5f4hf\") pod \"ovn-controller-metrics-nlvvn\" (UID: \"209debba-9c1c-4486-82c7-38424335f889\") " pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.218900 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243047 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243101 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-config\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243123 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243144 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243161 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243233 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243295 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmrfg\" (UniqueName: \"kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243375 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-scripts\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.243408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw2bp\" (UniqueName: \"kubernetes.io/projected/4c79dddc-8bad-4bfb-920f-434aea2c400c-kube-api-access-bw2bp\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.344788 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347514 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347579 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmrfg\" (UniqueName: \"kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347680 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-scripts\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347723 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw2bp\" (UniqueName: \"kubernetes.io/projected/4c79dddc-8bad-4bfb-920f-434aea2c400c-kube-api-access-bw2bp\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347770 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347793 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-config\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347810 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347832 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-rundir\") pod \"ovn-northd-0\" 
(UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347854 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347892 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.347916 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.348494 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.348577 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 
02:20:51.348851 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.349089 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-config\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.349111 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c79dddc-8bad-4bfb-920f-434aea2c400c-scripts\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.349309 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.351922 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.354054 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: 
\"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.355760 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.360244 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-nlvvn" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.366146 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c79dddc-8bad-4bfb-920f-434aea2c400c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.367870 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmrfg\" (UniqueName: \"kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg\") pod \"dnsmasq-dns-5d8b7b5bff-h882f\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.379144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw2bp\" (UniqueName: \"kubernetes.io/projected/4c79dddc-8bad-4bfb-920f-434aea2c400c-kube-api-access-bw2bp\") pod \"ovn-northd-0\" (UID: \"4c79dddc-8bad-4bfb-920f-434aea2c400c\") " pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.480430 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.106:5353: connect: connection refused" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.564602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.575676 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.614744 4681 generic.go:334] "Generic (PLEG): container finished" podID="64ec402c-6b37-4942-be19-9dcc436c6650" containerID="54266d783e9b3f336356f3dc4441776dbcce7bdadab6c89c6117cebbde979adb" exitCode=0 Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.614802 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" event={"ID":"64ec402c-6b37-4942-be19-9dcc436c6650","Type":"ContainerDied","Data":"54266d783e9b3f336356f3dc4441776dbcce7bdadab6c89c6117cebbde979adb"} Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.620299 4681 generic.go:334] "Generic (PLEG): container finished" podID="641eda62-a031-46b3-8cb3-c534541afcf7" containerID="873a28a2eae6661aeaeb4b71b453f9628b4726bfad3002fd813ef4f35a80c404" exitCode=0 Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.620500 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" event={"ID":"641eda62-a031-46b3-8cb3-c534541afcf7","Type":"ContainerDied","Data":"873a28a2eae6661aeaeb4b71b453f9628b4726bfad3002fd813ef4f35a80c404"} Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.835007 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:51 crc kubenswrapper[4681]: W0404 02:20:51.842202 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode527afea_72d4_4e70_a923_a5007d7d44bf.slice/crio-f8559b24b82634bad07c5b037aafa5fe79a6b13144dbcbcb173b1895c2da8c37 WatchSource:0}: Error finding container f8559b24b82634bad07c5b037aafa5fe79a6b13144dbcbcb173b1895c2da8c37: Status 404 returned error can't find the container with id f8559b24b82634bad07c5b037aafa5fe79a6b13144dbcbcb173b1895c2da8c37 Apr 04 02:20:51 crc kubenswrapper[4681]: I0404 02:20:51.918408 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-nlvvn"] Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.075910 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.235188 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:20:52 crc kubenswrapper[4681]: W0404 02:20:52.396487 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafa1c1fe_631b_46bf_8735_a6fcc2d3ad32.slice/crio-f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114 WatchSource:0}: Error finding container f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114: Status 404 returned error can't find the container with id f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114 Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.566964 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.574365 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.630383 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nlvvn" event={"ID":"209debba-9c1c-4486-82c7-38424335f889","Type":"ContainerStarted","Data":"5f3a35a3a8c6073a040f6a7fd51810435a6d0f21c1282324a02dc195d86bf109"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.631717 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" event={"ID":"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32","Type":"ContainerStarted","Data":"f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.633610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" event={"ID":"64ec402c-6b37-4942-be19-9dcc436c6650","Type":"ContainerDied","Data":"481572872fcebbc20411444c05f45ae4f77a3c5198ec6f87ca00d5fb023e25f2"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.633653 4681 scope.go:117] "RemoveContainer" containerID="54266d783e9b3f336356f3dc4441776dbcce7bdadab6c89c6117cebbde979adb" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.633686 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.635791 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" event={"ID":"641eda62-a031-46b3-8cb3-c534541afcf7","Type":"ContainerDied","Data":"adb995fa89492fae2e79c445a74611cdf86d9301a9017b042aff6971dfb82f1c"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.635802 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fdf6c6f7-q2q8w" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.636864 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4c79dddc-8bad-4bfb-920f-434aea2c400c","Type":"ContainerStarted","Data":"76eab242544f4c1412173173b4e2d54657852272e788d274fef7342424777d4e"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.639530 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" event={"ID":"e527afea-72d4-4e70-a923-a5007d7d44bf","Type":"ContainerStarted","Data":"f8559b24b82634bad07c5b037aafa5fe79a6b13144dbcbcb173b1895c2da8c37"} Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.699056 4681 scope.go:117] "RemoveContainer" containerID="9a6d40d9dc888707d9fc2b6101145b2acd5049a694a279b004aae47df96f3a35" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.739830 4681 scope.go:117] "RemoveContainer" containerID="873a28a2eae6661aeaeb4b71b453f9628b4726bfad3002fd813ef4f35a80c404" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.764276 4681 scope.go:117] "RemoveContainer" containerID="47feea440402ba09ee14438fa0cf02c80a220839ec619aada731ba929874b376" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772361 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qdw\" (UniqueName: \"kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw\") pod \"64ec402c-6b37-4942-be19-9dcc436c6650\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772478 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config\") pod \"641eda62-a031-46b3-8cb3-c534541afcf7\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772504 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config\") pod \"64ec402c-6b37-4942-be19-9dcc436c6650\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772536 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc\") pod \"641eda62-a031-46b3-8cb3-c534541afcf7\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772590 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc\") pod \"64ec402c-6b37-4942-be19-9dcc436c6650\" (UID: \"64ec402c-6b37-4942-be19-9dcc436c6650\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.772667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85whf\" (UniqueName: \"kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf\") pod \"641eda62-a031-46b3-8cb3-c534541afcf7\" (UID: \"641eda62-a031-46b3-8cb3-c534541afcf7\") " Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.781223 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw" (OuterVolumeSpecName: "kube-api-access-n8qdw") pod "64ec402c-6b37-4942-be19-9dcc436c6650" (UID: "64ec402c-6b37-4942-be19-9dcc436c6650"). InnerVolumeSpecName "kube-api-access-n8qdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.792879 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf" (OuterVolumeSpecName: "kube-api-access-85whf") pod "641eda62-a031-46b3-8cb3-c534541afcf7" (UID: "641eda62-a031-46b3-8cb3-c534541afcf7"). InnerVolumeSpecName "kube-api-access-85whf". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.817485 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "641eda62-a031-46b3-8cb3-c534541afcf7" (UID: "641eda62-a031-46b3-8cb3-c534541afcf7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.827871 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config" (OuterVolumeSpecName: "config") pod "64ec402c-6b37-4942-be19-9dcc436c6650" (UID: "64ec402c-6b37-4942-be19-9dcc436c6650"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.832786 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64ec402c-6b37-4942-be19-9dcc436c6650" (UID: "64ec402c-6b37-4942-be19-9dcc436c6650"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.851899 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config" (OuterVolumeSpecName: "config") pod "641eda62-a031-46b3-8cb3-c534541afcf7" (UID: "641eda62-a031-46b3-8cb3-c534541afcf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874536 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874578 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85whf\" (UniqueName: \"kubernetes.io/projected/641eda62-a031-46b3-8cb3-c534541afcf7-kube-api-access-85whf\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874592 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8qdw\" (UniqueName: \"kubernetes.io/projected/64ec402c-6b37-4942-be19-9dcc436c6650-kube-api-access-n8qdw\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874604 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874615 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64ec402c-6b37-4942-be19-9dcc436c6650-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.874628 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/641eda62-a031-46b3-8cb3-c534541afcf7-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.965652 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.973519 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-595d94d48f-jpw4b"] Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.981475 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:20:52 crc kubenswrapper[4681]: I0404 02:20:52.987618 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fdf6c6f7-q2q8w"] Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.212020 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" path="/var/lib/kubelet/pods/641eda62-a031-46b3-8cb3-c534541afcf7/volumes" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.213357 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" path="/var/lib/kubelet/pods/64ec402c-6b37-4942-be19-9dcc436c6650/volumes" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.651690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" event={"ID":"e527afea-72d4-4e70-a923-a5007d7d44bf","Type":"ContainerStarted","Data":"131abdd122f4e6f6e5b5e156966a71f8e935879e5f071ef8c576d39fda34eecb"} Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.665616 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.736535 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.790497 4681 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:20:53 crc kubenswrapper[4681]: E0404 02:20:53.790932 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.790950 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: E0404 02:20:53.790965 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.790972 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: E0404 02:20:53.790986 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="init" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.790992 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="init" Apr 04 02:20:53 crc kubenswrapper[4681]: E0404 02:20:53.791001 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="init" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.791006 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" containerName="init" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.791184 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.791208 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="641eda62-a031-46b3-8cb3-c534541afcf7" 
containerName="dnsmasq-dns" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.792248 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.865787 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.994232 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.994448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.994707 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:53 crc kubenswrapper[4681]: I0404 02:20:53.994873 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:53 crc 
kubenswrapper[4681]: I0404 02:20:53.994917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5dj\" (UniqueName: \"kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.095971 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.096243 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.096310 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.096356 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.096374 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5dj\" (UniqueName: \"kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.097244 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.097351 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.097483 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.098058 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.116062 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql5dj\" 
(UniqueName: \"kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj\") pod \"dnsmasq-dns-79d5b69897-l2rjd\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.413960 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.664068 4681 generic.go:334] "Generic (PLEG): container finished" podID="e527afea-72d4-4e70-a923-a5007d7d44bf" containerID="131abdd122f4e6f6e5b5e156966a71f8e935879e5f071ef8c576d39fda34eecb" exitCode=0 Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.664382 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" event={"ID":"e527afea-72d4-4e70-a923-a5007d7d44bf","Type":"ContainerDied","Data":"131abdd122f4e6f6e5b5e156966a71f8e935879e5f071ef8c576d39fda34eecb"} Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.668667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-nlvvn" event={"ID":"209debba-9c1c-4486-82c7-38424335f889","Type":"ContainerStarted","Data":"83178fdf2792218332de61c1efed95b5bac70996da9096f7b7beebcc0d1dd514"} Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.669874 4681 generic.go:334] "Generic (PLEG): container finished" podID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerID="f7648c12b94896ab5c25ac4dd59e7129ac21cce7d90999e7f104a71e719c632d" exitCode=0 Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.669924 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" event={"ID":"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32","Type":"ContainerDied","Data":"f7648c12b94896ab5c25ac4dd59e7129ac21cce7d90999e7f104a71e719c632d"} Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.674942 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="6579b16f-f45a-4c22-9107-6763d001efb2" containerID="1cc25dad350a8aaa54a6f494b053fa55a5f1f7e1c9bf98c525234211428b87b9" exitCode=0 Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.675663 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerDied","Data":"1cc25dad350a8aaa54a6f494b053fa55a5f1f7e1c9bf98c525234211428b87b9"} Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.726082 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-nlvvn" podStartSLOduration=4.725988448 podStartE2EDuration="4.725988448s" podCreationTimestamp="2026-04-04 02:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:20:54.719970084 +0000 UTC m=+1534.385745214" watchObservedRunningTime="2026-04-04 02:20:54.725988448 +0000 UTC m=+1534.391763568" Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.908366 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.977002 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Apr 04 02:20:54 crc kubenswrapper[4681]: I0404 02:20:54.991668 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.003412 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.004080 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-pgqt9" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.004243 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.004699 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.023540 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.046023 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113203 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb9z7\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-kube-api-access-tb9z7\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113301 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113374 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-cache\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113473 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc00a76-b945-4eca-98d7-1f126a78785f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113591 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-lock\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.113717 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.214613 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87ltp\" (UniqueName: \"kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp\") pod \"e527afea-72d4-4e70-a923-a5007d7d44bf\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.214772 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb\") pod \"e527afea-72d4-4e70-a923-a5007d7d44bf\" (UID: 
\"e527afea-72d4-4e70-a923-a5007d7d44bf\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.215507 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config\") pod \"e527afea-72d4-4e70-a923-a5007d7d44bf\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.215560 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc\") pod \"e527afea-72d4-4e70-a923-a5007d7d44bf\" (UID: \"e527afea-72d4-4e70-a923-a5007d7d44bf\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.215936 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-lock\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.216121 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.216313 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb9z7\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-kube-api-access-tb9z7\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.216351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.216405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-cache\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.216574 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc00a76-b945-4eca-98d7-1f126a78785f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.217359 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.218326 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.218352 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.218405 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. 
No retries permitted until 2026-04-04 02:20:55.718385903 +0000 UTC m=+1535.384161093 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.218940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp" (OuterVolumeSpecName: "kube-api-access-87ltp") pod "e527afea-72d4-4e70-a923-a5007d7d44bf" (UID: "e527afea-72d4-4e70-a923-a5007d7d44bf"). InnerVolumeSpecName "kube-api-access-87ltp". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.219178 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-lock\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.219484 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/cdc00a76-b945-4eca-98d7-1f126a78785f-cache\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.222027 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdc00a76-b945-4eca-98d7-1f126a78785f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.238913 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e527afea-72d4-4e70-a923-a5007d7d44bf" (UID: "e527afea-72d4-4e70-a923-a5007d7d44bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.240982 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config" (OuterVolumeSpecName: "config") pod "e527afea-72d4-4e70-a923-a5007d7d44bf" (UID: "e527afea-72d4-4e70-a923-a5007d7d44bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.242606 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e527afea-72d4-4e70-a923-a5007d7d44bf" (UID: "e527afea-72d4-4e70-a923-a5007d7d44bf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.243915 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb9z7\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-kube-api-access-tb9z7\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.248848 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.318239 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87ltp\" (UniqueName: \"kubernetes.io/projected/e527afea-72d4-4e70-a923-a5007d7d44bf-kube-api-access-87ltp\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.318295 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.318305 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.318313 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e527afea-72d4-4e70-a923-a5007d7d44bf-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.548350 4681 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Apr 04 02:20:55 crc kubenswrapper[4681]: rpc 
error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Apr 04 02:20:55 crc kubenswrapper[4681]: > podSandboxID="f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114" Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.548845 4681 kuberuntime_manager.go:1274] "Unhandled Error" err=< Apr 04 02:20:55 crc kubenswrapper[4681]: container &Container{Name:dnsmasq-dns,Image:38.102.83.110:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n94hc8hcdhd9h5cfh88hddh99h5bfh576hf6hcbh64fh655h594h657h5d9h6fh586h57h7dh657h54bhfbh585h549hc9h567h5bdh5d8h5d4h697q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb
,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qmrfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5d8b7b5bff-h882f_openstack(afa1c1fe-631b-46bf-8735-a6fcc2d3ad32): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Apr 04 02:20:55 crc kubenswrapper[4681]: > logger="UnhandledError" Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.550092 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: 
\"container create failed: mount `/var/lib/kubelet/pods/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.562205 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-m7hrz"] Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.562763 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e527afea-72d4-4e70-a923-a5007d7d44bf" containerName="init" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.562779 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e527afea-72d4-4e70-a923-a5007d7d44bf" containerName="init" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.562977 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e527afea-72d4-4e70-a923-a5007d7d44bf" containerName="init" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.563734 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.567686 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.567919 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.571973 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.573676 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-m7hrz"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.594836 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-m7hrz"] Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.649806 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-thzv9 ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/swift-ring-rebalance-m7hrz" podUID="55fa04cd-8b44-4759-9978-5b7df697f46d" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.657212 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gz57l"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.658314 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.684812 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gz57l"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.708454 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerStarted","Data":"5eececdba393d6de2eec67ccc39436bc3121546f00c5a0880b34465a1ecec016"} Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.708499 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerStarted","Data":"6ff70907f296bb3774472a4c99aea6a6153400b98f82e38239889e8526a3702a"} Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.731683 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.731763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" event={"ID":"e527afea-72d4-4e70-a923-a5007d7d44bf","Type":"ContainerDied","Data":"f8559b24b82634bad07c5b037aafa5fe79a6b13144dbcbcb173b1895c2da8c37"} Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.731800 4681 scope.go:117] "RemoveContainer" containerID="131abdd122f4e6f6e5b5e156966a71f8e935879e5f071ef8c576d39fda34eecb" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.733769 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9f548c5f-whptx" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735254 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735364 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735383 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735445 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" 
Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735493 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735540 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.735562 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thzv9\" (UniqueName: \"kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.735724 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.735745 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: E0404 02:20:55.735791 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:20:56.735773352 +0000 UTC m=+1536.401548472 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.777592 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.838886 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.838923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thzv9\" (UniqueName: \"kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.838945 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.838972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " 
pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.838989 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxnnh\" (UniqueName: \"kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839007 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839091 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839109 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839128 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 
crc kubenswrapper[4681]: I0404 02:20:55.839205 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839285 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839305 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839322 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.839347 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 
02:20:55.855455 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.855668 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.856055 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.871076 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.892345 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.893174 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.920684 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thzv9\" (UniqueName: \"kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9\") pod \"swift-ring-rebalance-m7hrz\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.922482 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.930434 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9f548c5f-whptx"] Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.940687 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.940804 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.940940 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:55 crc 
kubenswrapper[4681]: I0404 02:20:55.941181 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941224 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941244 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxnnh\" (UniqueName: \"kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941235 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941323 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941416 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941432 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941479 4681 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/55fa04cd-8b44-4759-9978-5b7df697f46d-etc-swift\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.941313 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts" (OuterVolumeSpecName: "scripts") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.942148 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.942721 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.942992 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.945629 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.946945 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.947159 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.947698 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.956693 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxnnh\" (UniqueName: \"kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh\") pod \"swift-ring-rebalance-gz57l\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") " pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:55 crc kubenswrapper[4681]: I0404 02:20:55.989855 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043082 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043298 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thzv9\" (UniqueName: \"kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043373 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043420 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf\") pod \"55fa04cd-8b44-4759-9978-5b7df697f46d\" (UID: \"55fa04cd-8b44-4759-9978-5b7df697f46d\") " Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043969 4681 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.043992 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55fa04cd-8b44-4759-9978-5b7df697f46d-scripts\") on node \"crc\" 
DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.047182 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.047211 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.048478 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.050952 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9" (OuterVolumeSpecName: "kube-api-access-thzv9") pod "55fa04cd-8b44-4759-9978-5b7df697f46d" (UID: "55fa04cd-8b44-4759-9978-5b7df697f46d"). InnerVolumeSpecName "kube-api-access-thzv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.148391 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.148680 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thzv9\" (UniqueName: \"kubernetes.io/projected/55fa04cd-8b44-4759-9978-5b7df697f46d-kube-api-access-thzv9\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.148694 4681 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-swiftconf\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.148709 4681 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/55fa04cd-8b44-4759-9978-5b7df697f46d-dispersionconf\") on node \"crc\" DevicePath \"\"" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.498044 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gz57l"] Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.525805 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.525846 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.741659 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" event={"ID":"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32","Type":"ContainerStarted","Data":"7a58dccd147291c6839260b4baa446c8816b5a3ea3ce8179ac61f14b3e2aec55"} Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.741878 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.743077 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gz57l" event={"ID":"6d76298d-bafc-4c57-9e19-f77f982a3187","Type":"ContainerStarted","Data":"36d93a56a995f2fe5aa7fd06bdc95c73d4e365c052eacf2fa4c16f7ad775d0d7"} Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.745249 4681 generic.go:334] "Generic (PLEG): container finished" podID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerID="5eececdba393d6de2eec67ccc39436bc3121546f00c5a0880b34465a1ecec016" exitCode=0 Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.745344 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-m7hrz" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.750419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerDied","Data":"5eececdba393d6de2eec67ccc39436bc3121546f00c5a0880b34465a1ecec016"} Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.760447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:56 crc kubenswrapper[4681]: E0404 02:20:56.760717 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:20:56 crc kubenswrapper[4681]: E0404 02:20:56.760731 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:20:56 crc kubenswrapper[4681]: E0404 02:20:56.760776 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:20:58.760762202 +0000 UTC m=+1538.426537322 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.795523 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" podStartSLOduration=5.795475024 podStartE2EDuration="5.795475024s" podCreationTimestamp="2026-04-04 02:20:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:20:56.781985374 +0000 UTC m=+1536.447760514" watchObservedRunningTime="2026-04-04 02:20:56.795475024 +0000 UTC m=+1536.461250144" Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.833046 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-m7hrz"] Apr 04 02:20:56 crc kubenswrapper[4681]: I0404 02:20:56.841954 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-m7hrz"] Apr 04 02:20:57 crc kubenswrapper[4681]: I0404 02:20:57.017185 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-595d94d48f-jpw4b" podUID="64ec402c-6b37-4942-be19-9dcc436c6650" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.108:5353: i/o timeout" Apr 04 02:20:57 crc kubenswrapper[4681]: I0404 02:20:57.212580 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55fa04cd-8b44-4759-9978-5b7df697f46d" path="/var/lib/kubelet/pods/55fa04cd-8b44-4759-9978-5b7df697f46d/volumes" Apr 04 02:20:57 crc kubenswrapper[4681]: I0404 02:20:57.213078 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e527afea-72d4-4e70-a923-a5007d7d44bf" path="/var/lib/kubelet/pods/e527afea-72d4-4e70-a923-a5007d7d44bf/volumes" Apr 04 02:20:58 crc kubenswrapper[4681]: 
I0404 02:20:58.761386 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerStarted","Data":"c61bd32edc62af352fd19573a66b63fcf8618beede8596248d09ff9bf3664577"} Apr 04 02:20:58 crc kubenswrapper[4681]: I0404 02:20:58.796520 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:20:58 crc kubenswrapper[4681]: E0404 02:20:58.796745 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:20:58 crc kubenswrapper[4681]: E0404 02:20:58.796770 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:20:58 crc kubenswrapper[4681]: E0404 02:20:58.796837 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:21:02.796815691 +0000 UTC m=+1542.462590811 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:21:01 crc kubenswrapper[4681]: I0404 02:21:01.576422 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:21:02 crc kubenswrapper[4681]: I0404 02:21:02.876995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:21:02 crc kubenswrapper[4681]: E0404 02:21:02.877329 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:21:02 crc kubenswrapper[4681]: E0404 02:21:02.877380 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:21:02 crc kubenswrapper[4681]: E0404 02:21:02.877468 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:21:10.877441462 +0000 UTC m=+1550.543216612 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.776707 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest" Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.777319 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest" Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.777537 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:38.102.83.110:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n654h5dch5cdh555h79h98hd5h54bh59h554h58hcch58fh5fh65h5d6h9bh648h69h687h679h675h5bdh564h55bh5f5h57dhf7h8bh76h58bh5fdq,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n86h5c9h588h6dh5cch88h679h54h54dh549hddhc8h65ch66dhfdhd8h6bh5f7h588h668h556hbch665h599h66dh64bh54fh5bch5d5h597h96hccq,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n66chd9h59hf7h55dh65h666h67bh5ch8h658h588h644h55dh67dh64fh75h9bh686h586h99h74h99h699h684h6dh5b4hfch64ch5dfh84h54fq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n5c8h664h644hb7h564h66h696h599h5b9h6fhbfhbhf9h55fh5c9hcdh68bh678h5d8h5b8h66ch648h5b5h557h675h577hfh647hfdh65hcbhffq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.k
ey,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bw2bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(4c79dddc-8bad-4bfb-920f-434aea2c400c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:21:10 crc kubenswrapper[4681]: I0404 02:21:10.910406 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.910593 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.910870 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Apr 04 02:21:10 crc kubenswrapper[4681]: E0404 02:21:10.910929 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:21:26.910911233 +0000 UTC m=+1566.576686373 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found Apr 04 02:21:12 crc kubenswrapper[4681]: E0404 02:21:12.150100 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-northd-0" podUID="4c79dddc-8bad-4bfb-920f-434aea2c400c" Apr 04 02:21:12 crc kubenswrapper[4681]: I0404 02:21:12.316820 4681 scope.go:117] "RemoveContainer" containerID="ef016fc482b0c4b464c6e4702f1ebe59beb821e8121c2c0a7dc7ecb4d1877291" Apr 04 02:21:12 crc kubenswrapper[4681]: I0404 02:21:12.896075 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"4c79dddc-8bad-4bfb-920f-434aea2c400c","Type":"ContainerStarted","Data":"7727ba3b33efa152d008a08ec22d0bafc2e5532917f10ec7eaa9f92d27449cf5"} Apr 04 02:21:12 crc kubenswrapper[4681]: I0404 02:21:12.896363 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:21:12 crc kubenswrapper[4681]: E0404 02:21:12.898101 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest\\\"\"" pod="openstack/ovn-northd-0" podUID="4c79dddc-8bad-4bfb-920f-434aea2c400c" Apr 04 02:21:12 crc kubenswrapper[4681]: I0404 02:21:12.900195 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:21:12 crc kubenswrapper[4681]: I0404 02:21:12.929726 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" podStartSLOduration=19.929693989 podStartE2EDuration="19.929693989s" podCreationTimestamp="2026-04-04 02:20:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:21:12.92136808 +0000 UTC m=+1552.587143200" watchObservedRunningTime="2026-04-04 02:21:12.929693989 +0000 UTC m=+1552.595469119" Apr 04 02:21:13 crc kubenswrapper[4681]: I0404 02:21:13.000865 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:21:13 crc kubenswrapper[4681]: I0404 02:21:13.008591 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="dnsmasq-dns" containerID="cri-o://7a58dccd147291c6839260b4baa446c8816b5a3ea3ce8179ac61f14b3e2aec55" gracePeriod=10 Apr 04 
02:21:13 crc kubenswrapper[4681]: I0404 02:21:13.904921 4681 generic.go:334] "Generic (PLEG): container finished" podID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerID="7a58dccd147291c6839260b4baa446c8816b5a3ea3ce8179ac61f14b3e2aec55" exitCode=0 Apr 04 02:21:13 crc kubenswrapper[4681]: I0404 02:21:13.905003 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" event={"ID":"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32","Type":"ContainerDied","Data":"7a58dccd147291c6839260b4baa446c8816b5a3ea3ce8179ac61f14b3e2aec55"} Apr 04 02:21:13 crc kubenswrapper[4681]: E0404 02:21:13.907365 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-ovn-northd:watcher_latest\\\"\"" pod="openstack/ovn-northd-0" podUID="4c79dddc-8bad-4bfb-920f-434aea2c400c" Apr 04 02:21:16 crc kubenswrapper[4681]: I0404 02:21:16.541820 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jz78r" podUID="616e7c64-534b-41e8-8ad9-0abf8f05d3d5" containerName="ovn-controller" probeResult="failure" output=< Apr 04 02:21:16 crc kubenswrapper[4681]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Apr 04 02:21:16 crc kubenswrapper[4681]: > Apr 04 02:21:16 crc kubenswrapper[4681]: I0404 02:21:16.576695 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.124:5353: connect: connection refused" Apr 04 02:21:16 crc kubenswrapper[4681]: I0404 02:21:16.934634 4681 generic.go:334] "Generic (PLEG): container finished" podID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerID="6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5" exitCode=0 Apr 04 02:21:16 crc 
kubenswrapper[4681]: I0404 02:21:16.934726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerDied","Data":"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5"} Apr 04 02:21:17 crc kubenswrapper[4681]: I0404 02:21:17.944851 4681 generic.go:334] "Generic (PLEG): container finished" podID="189dfe5e-4211-48c8-bc76-ea9c229c5d65" containerID="cfc0250f9ff590c86c7a78c7162839570a127e0fbd746ca75f15337422fe7bed" exitCode=0 Apr 04 02:21:17 crc kubenswrapper[4681]: I0404 02:21:17.944946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"189dfe5e-4211-48c8-bc76-ea9c229c5d65","Type":"ContainerDied","Data":"cfc0250f9ff590c86c7a78c7162839570a127e0fbd746ca75f15337422fe7bed"} Apr 04 02:21:17 crc kubenswrapper[4681]: I0404 02:21:17.948775 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerStarted","Data":"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15"} Apr 04 02:21:17 crc kubenswrapper[4681]: I0404 02:21:17.948993 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.002062 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=69.827946973 podStartE2EDuration="1m22.002044859s" podCreationTimestamp="2026-04-04 02:19:56 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.925826966 +0000 UTC m=+1508.591602086" lastFinishedPulling="2026-04-04 02:20:41.099924852 +0000 UTC m=+1520.765699972" observedRunningTime="2026-04-04 02:21:17.995919951 +0000 UTC m=+1557.661695081" watchObservedRunningTime="2026-04-04 02:21:18.002044859 +0000 UTC m=+1557.667819979" Apr 04 02:21:18 crc 
kubenswrapper[4681]: I0404 02:21:18.420326 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.554941 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc\") pod \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.555007 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb\") pod \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.555078 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config\") pod \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.555109 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmrfg\" (UniqueName: \"kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg\") pod \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.555135 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb\") pod \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\" (UID: \"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32\") " Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.559336 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg" (OuterVolumeSpecName: "kube-api-access-qmrfg") pod "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" (UID: "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32"). InnerVolumeSpecName "kube-api-access-qmrfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.619638 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" (UID: "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.631776 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" (UID: "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.633905 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" (UID: "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.644605 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config" (OuterVolumeSpecName: "config") pod "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" (UID: "afa1c1fe-631b-46bf-8735-a6fcc2d3ad32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.657709 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.657746 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.657761 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.657776 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmrfg\" (UniqueName: \"kubernetes.io/projected/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-kube-api-access-qmrfg\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.657790 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.961809 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.968623 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b5bff-h882f" event={"ID":"afa1c1fe-631b-46bf-8735-a6fcc2d3ad32","Type":"ContainerDied","Data":"f61a2b4b3e5c202613701675d26897dbfce930e235cc28e5b998e6ab8ef2b114"} Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.968733 4681 scope.go:117] "RemoveContainer" containerID="7a58dccd147291c6839260b4baa446c8816b5a3ea3ce8179ac61f14b3e2aec55" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.993511 4681 scope.go:117] "RemoveContainer" containerID="f7648c12b94896ab5c25ac4dd59e7129ac21cce7d90999e7f104a71e719c632d" Apr 04 02:21:18 crc kubenswrapper[4681]: I0404 02:21:18.998473 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:21:19 crc kubenswrapper[4681]: I0404 02:21:19.007550 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b5bff-h882f"] Apr 04 02:21:19 crc kubenswrapper[4681]: I0404 02:21:19.214595 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" path="/var/lib/kubelet/pods/afa1c1fe-631b-46bf-8735-a6fcc2d3ad32/volumes" Apr 04 02:21:19 crc kubenswrapper[4681]: I0404 02:21:19.968897 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/notifications-rabbitmq-server-0" event={"ID":"189dfe5e-4211-48c8-bc76-ea9c229c5d65","Type":"ContainerStarted","Data":"c718d3139ffd04922ef81c3879b928cf9ae49a491a34355996fd6c106413009f"} Apr 04 02:21:19 crc kubenswrapper[4681]: I0404 02:21:19.969428 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/notifications-rabbitmq-server-0" Apr 04 02:21:19 crc kubenswrapper[4681]: I0404 02:21:19.972759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerStarted","Data":"51f88ab3729b37f6e0bb849dffa5692aa33e222efd7a785730e1f01122f26b8a"} Apr 04 02:21:20 crc kubenswrapper[4681]: I0404 02:21:20.005500 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/notifications-rabbitmq-server-0" podStartSLOduration=71.895340365 podStartE2EDuration="1m24.005476489s" podCreationTimestamp="2026-04-04 02:19:56 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.891415413 +0000 UTC m=+1508.557190533" lastFinishedPulling="2026-04-04 02:20:41.001551537 +0000 UTC m=+1520.667326657" observedRunningTime="2026-04-04 02:21:19.996352869 +0000 UTC m=+1559.662127999" watchObservedRunningTime="2026-04-04 02:21:20.005476489 +0000 UTC m=+1559.671251619" Apr 04 02:21:20 crc kubenswrapper[4681]: I0404 02:21:20.985807 4681 generic.go:334] "Generic (PLEG): container finished" podID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerID="d0e9586c6a17e8d85e77ce9203d97ce45d37dae710b702194232b433a39aad53" exitCode=0 Apr 04 02:21:20 crc kubenswrapper[4681]: I0404 02:21:20.986063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerDied","Data":"d0e9586c6a17e8d85e77ce9203d97ce45d37dae710b702194232b433a39aad53"} Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.533698 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jz78r" podUID="616e7c64-534b-41e8-8ad9-0abf8f05d3d5" containerName="ovn-controller" probeResult="failure" output=< Apr 04 02:21:21 crc kubenswrapper[4681]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Apr 04 02:21:21 crc kubenswrapper[4681]: > Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.555443 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:21:21 crc kubenswrapper[4681]: 
I0404 02:21:21.587249 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ndgrb" Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.800954 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-jz78r-config-4xn6v"] Apr 04 02:21:21 crc kubenswrapper[4681]: E0404 02:21:21.801690 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="dnsmasq-dns" Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.801717 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="dnsmasq-dns" Apr 04 02:21:21 crc kubenswrapper[4681]: E0404 02:21:21.801734 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="init" Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.801741 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="init" Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.801910 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa1c1fe-631b-46bf-8735-a6fcc2d3ad32" containerName="dnsmasq-dns" Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.802477 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.807953 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.816116 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jz78r-config-4xn6v"]
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913316 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913700 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f649m\" (UniqueName: \"kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913750 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:21 crc kubenswrapper[4681]: I0404 02:21:21.913790 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.014996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015118 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015188 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015226 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015288 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f649m\" (UniqueName: \"kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015336 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015339 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015382 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.015335 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.102255 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.103777 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.107172 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f649m\" (UniqueName: \"kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m\") pod \"ovn-controller-jz78r-config-4xn6v\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") " pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:22 crc kubenswrapper[4681]: I0404 02:21:22.234825 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:23 crc kubenswrapper[4681]: I0404 02:21:23.004986 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerStarted","Data":"d0239e68aa867b7ab006964518ad4ba1941d3dc34f428aac653fd98e94ec8c99"}
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.524517 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.525058 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.525107 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.525934 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jz78r" podUID="616e7c64-534b-41e8-8ad9-0abf8f05d3d5" containerName="ovn-controller" probeResult="failure" output=<
Apr 04 02:21:26 crc kubenswrapper[4681]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Apr 04 02:21:26 crc kubenswrapper[4681]: >
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.526839 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Apr 04 02:21:26 crc kubenswrapper[4681]: I0404 02:21:26.526893 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5" gracePeriod=600
Apr 04 02:21:27 crc kubenswrapper[4681]: I0404 02:21:27.000010 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0"
Apr 04 02:21:27 crc kubenswrapper[4681]: E0404 02:21:27.000260 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Apr 04 02:21:27 crc kubenswrapper[4681]: E0404 02:21:27.000312 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Apr 04 02:21:27 crc kubenswrapper[4681]: E0404 02:21:27.000377 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:21:59.000356348 +0000 UTC m=+1598.666131498 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found
Apr 04 02:21:27 crc kubenswrapper[4681]: I0404 02:21:27.883333 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused"
Apr 04 02:21:31 crc kubenswrapper[4681]: I0404 02:21:31.544962 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-jz78r" podUID="616e7c64-534b-41e8-8ad9-0abf8f05d3d5" containerName="ovn-controller" probeResult="failure" output=<
Apr 04 02:21:31 crc kubenswrapper[4681]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Apr 04 02:21:31 crc kubenswrapper[4681]: >
Apr 04 02:21:33 crc kubenswrapper[4681]: I0404 02:21:33.103693 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5" exitCode=0
Apr 04 02:21:33 crc kubenswrapper[4681]: I0404 02:21:33.103797 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5"}
Apr 04 02:21:33 crc kubenswrapper[4681]: I0404 02:21:33.104248 4681 scope.go:117] "RemoveContainer" containerID="29e9a58ef2bccc789fece86b7ac9bb80cce347a67979c6787d7300d3e52c5b75"
Apr 04 02:21:33 crc kubenswrapper[4681]: I0404 02:21:33.728946 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-jz78r-config-4xn6v"]
Apr 04 02:21:33 crc kubenswrapper[4681]: W0404 02:21:33.745066 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d3c9698_a36e_4606_8c79_e12920f642f3.slice/crio-03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632 WatchSource:0}: Error finding container 03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632: Status 404 returned error can't find the container with id 03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632
Apr 04 02:21:34 crc kubenswrapper[4681]: I0404 02:21:34.116891 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r-config-4xn6v" event={"ID":"9d3c9698-a36e-4606-8c79-e12920f642f3","Type":"ContainerStarted","Data":"03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632"}
Apr 04 02:21:34 crc kubenswrapper[4681]: I0404 02:21:34.119116 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerStarted","Data":"6b45a0a11701e0e9c68853c0d349941d9a5177198c7082613ce522d101158e75"}
Apr 04 02:21:34 crc kubenswrapper[4681]: I0404 02:21:34.119312 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Apr 04 02:21:34 crc kubenswrapper[4681]: I0404 02:21:34.142128 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371938.712664 podStartE2EDuration="1m38.142111963s" podCreationTimestamp="2026-04-04 02:19:56 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.248431967 +0000 UTC m=+1508.914207087" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:21:34.136380836 +0000 UTC m=+1573.802155956" watchObservedRunningTime="2026-04-04 02:21:34.142111963 +0000 UTC m=+1573.807887083"
Apr 04 02:21:34 crc kubenswrapper[4681]: E0404 02:21:34.460876 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest"
Apr 04 02:21:34 crc kubenswrapper[4681]: E0404 02:21:34.460921 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest"
Apr 04 02:21:34 crc kubenswrapper[4681]: E0404 02:21:34.461053 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:swift-ring-rebalance,Image:38.102.83.110:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest,Command:[/usr/local/bin/swift-ring-tool all],Args:[],WorkingDir:/etc/swift,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CM_NAME,Value:swift-ring-files,ValueFrom:nil,},EnvVar{Name:NAMESPACE,Value:openstack,ValueFrom:nil,},EnvVar{Name:OWNER_APIVERSION,Value:swift.openstack.org/v1beta1,ValueFrom:nil,},EnvVar{Name:OWNER_KIND,Value:SwiftRing,ValueFrom:nil,},EnvVar{Name:OWNER_NAME,Value:swift-ring,ValueFrom:nil,},EnvVar{Name:OWNER_UID,Value:c3bfa2a5-8cf8-4fa6-9cad-52825f0f00a7,ValueFrom:nil,},EnvVar{Name:SWIFT_MIN_PART_HOURS,Value:1,ValueFrom:nil,},EnvVar{Name:SWIFT_PART_POWER,Value:10,ValueFrom:nil,},EnvVar{Name:SWIFT_REPLICAS,Value:1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/swift-ring-tool,SubPath:swift-ring-tool,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:swiftconf,ReadOnly:true,MountPath:/etc/swift/swift.conf,SubPath:swift.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-swift,ReadOnly:false,MountPath:/etc/swift,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ring-data-devices,ReadOnly:true,MountPath:/var/lib/config-data/ring-devices,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dispersionconf,ReadOnly:true,MountPath:/etc/swift/dispersion.conf,SubPath:dispersion.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xxnnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42445,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-ring-rebalance-gz57l_openstack(6d76298d-bafc-4c57-9e19-f77f982a3187): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Apr 04 02:21:34 crc kubenswrapper[4681]: E0404 02:21:34.462255 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/swift-ring-rebalance-gz57l" podUID="6d76298d-bafc-4c57-9e19-f77f982a3187"
Apr 04 02:21:35 crc kubenswrapper[4681]: I0404 02:21:35.134658 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da"}
Apr 04 02:21:35 crc kubenswrapper[4681]: I0404 02:21:35.136798 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r-config-4xn6v" event={"ID":"9d3c9698-a36e-4606-8c79-e12920f642f3","Type":"ContainerStarted","Data":"bf6e5429e571e07de9ffe75d154b44f596a33fd05322ef8437505dd567e4b225"}
Apr 04 02:21:35 crc kubenswrapper[4681]: E0404 02:21:35.139447 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"swift-ring-rebalance\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-swift-proxy-server:watcher_latest\\\"\"" pod="openstack/swift-ring-rebalance-gz57l" podUID="6d76298d-bafc-4c57-9e19-f77f982a3187"
Apr 04 02:21:36 crc kubenswrapper[4681]: I0404 02:21:36.527173 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-jz78r"
Apr 04 02:21:37 crc kubenswrapper[4681]: I0404 02:21:37.154013 4681 generic.go:334] "Generic (PLEG): container finished" podID="9d3c9698-a36e-4606-8c79-e12920f642f3" containerID="bf6e5429e571e07de9ffe75d154b44f596a33fd05322ef8437505dd567e4b225" exitCode=0
Apr 04 02:21:37 crc kubenswrapper[4681]: I0404 02:21:37.154068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r-config-4xn6v" event={"ID":"9d3c9698-a36e-4606-8c79-e12920f642f3","Type":"ContainerDied","Data":"bf6e5429e571e07de9ffe75d154b44f596a33fd05322ef8437505dd567e4b225"}
Apr 04 02:21:37 crc kubenswrapper[4681]: I0404 02:21:37.875449 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused"
Apr 04 02:21:38 crc kubenswrapper[4681]: I0404 02:21:38.174794 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="189dfe5e-4211-48c8-bc76-ea9c229c5d65" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused"
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.870101 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942386 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f649m\" (UniqueName: \"kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942466 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942581 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942603 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942693 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.942726 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts\") pod \"9d3c9698-a36e-4606-8c79-e12920f642f3\" (UID: \"9d3c9698-a36e-4606-8c79-e12920f642f3\") "
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.943433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.943743 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.943785 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.943801 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run" (OuterVolumeSpecName: "var-run") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.944598 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts" (OuterVolumeSpecName: "scripts") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:21:39 crc kubenswrapper[4681]: I0404 02:21:39.954503 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m" (OuterVolumeSpecName: "kube-api-access-f649m") pod "9d3c9698-a36e-4606-8c79-e12920f642f3" (UID: "9d3c9698-a36e-4606-8c79-e12920f642f3"). InnerVolumeSpecName "kube-api-access-f649m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044351 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044383 4681 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run-ovn\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044396 4681 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-log-ovn\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044407 4681 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d3c9698-a36e-4606-8c79-e12920f642f3-var-run\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044419 4681 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/9d3c9698-a36e-4606-8c79-e12920f642f3-additional-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.044434 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f649m\" (UniqueName: \"kubernetes.io/projected/9d3c9698-a36e-4606-8c79-e12920f642f3-kube-api-access-f649m\") on node \"crc\" DevicePath \"\""
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.185116 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-jz78r-config-4xn6v" event={"ID":"9d3c9698-a36e-4606-8c79-e12920f642f3","Type":"ContainerDied","Data":"03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632"}
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.185162 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f2f80a7483ec0d2813da509e03365177d4a475fd549ce21c64a4911f499632"
Apr 04 02:21:40 crc kubenswrapper[4681]: I0404 02:21:40.185393 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-jz78r-config-4xn6v"
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.025795 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-jz78r-config-4xn6v"]
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.043351 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-jz78r-config-4xn6v"]
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.218600 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d3c9698-a36e-4606-8c79-e12920f642f3" path="/var/lib/kubelet/pods/9d3c9698-a36e-4606-8c79-e12920f642f3/volumes"
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.225865 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"4c79dddc-8bad-4bfb-920f-434aea2c400c","Type":"ContainerStarted","Data":"43f7c52006491ba17b6aef4da150f4ac70a486fb831f809cba60906ae978b135"}
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.226425 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Apr 04 02:21:41 crc kubenswrapper[4681]: I0404 02:21:41.251155 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.415174342 podStartE2EDuration="50.251133863s" podCreationTimestamp="2026-04-04 02:20:51 +0000 UTC" firstStartedPulling="2026-04-04 02:20:52.089218947 +0000 UTC m=+1531.754994067" lastFinishedPulling="2026-04-04 02:21:40.925178428 +0000 UTC m=+1580.590953588" observedRunningTime="2026-04-04 02:21:41.244578083 +0000 UTC m=+1580.910353213" watchObservedRunningTime="2026-04-04 02:21:41.251133863 +0000 UTC m=+1580.916908983"
Apr 04 02:21:42 crc kubenswrapper[4681]: I0404 02:21:42.236952 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerStarted","Data":"08559eaf4bf17efe29b5473556b8bda99d480fa11f9ecf1103240dcba8545453"}
Apr 04 02:21:42 crc kubenswrapper[4681]: I0404 02:21:42.271653 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=26.521742641 podStartE2EDuration="1m39.271627487s" podCreationTimestamp="2026-04-04 02:20:03 +0000 UTC" firstStartedPulling="2026-04-04 02:20:28.929284391 +0000 UTC m=+1508.595059511" lastFinishedPulling="2026-04-04 02:21:41.679169237 +0000 UTC m=+1581.344944357" observedRunningTime="2026-04-04 02:21:42.26334783 +0000 UTC m=+1581.929122950" watchObservedRunningTime="2026-04-04 02:21:42.271627487 +0000 UTC m=+1581.937402597"
Apr 04 02:21:44 crc kubenswrapper[4681]: I0404 02:21:44.255852 4681 generic.go:334] "Generic (PLEG): container finished" podID="dd82b7b7-ba75-4588-9dc2-c47ed34762b5" containerID="08930779d9b2f26aa589d9c5f5db22692071c9fb19190b268180bf6cb38432f2" exitCode=0
Apr 04 02:21:44 crc kubenswrapper[4681]: I0404 02:21:44.255912 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd82b7b7-ba75-4588-9dc2-c47ed34762b5","Type":"ContainerDied","Data":"08930779d9b2f26aa589d9c5f5db22692071c9fb19190b268180bf6cb38432f2"}
Apr 04 02:21:44 crc kubenswrapper[4681]: I0404 02:21:44.983967 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:45 crc kubenswrapper[4681]: I0404 02:21:45.269247 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dd82b7b7-ba75-4588-9dc2-c47ed34762b5","Type":"ContainerStarted","Data":"940d43fdb44cad1a55982d38d9b1a8e125883bf9e64aa9a4a7c8e56938c7fcd0"}
Apr 04 02:21:45 crc kubenswrapper[4681]: I0404 02:21:45.303310 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=96.447209573 podStartE2EDuration="1m48.303258213s" podCreationTimestamp="2026-04-04 02:19:57 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.154818102 +0000 UTC m=+1508.820593222" lastFinishedPulling="2026-04-04 02:20:41.010866742 +0000 UTC m=+1520.676641862" observedRunningTime="2026-04-04 02:21:45.289895047 +0000 UTC m=+1584.955670207" watchObservedRunningTime="2026-04-04 02:21:45.303258213 +0000 UTC m=+1584.969033363"
Apr 04 02:21:47 crc kubenswrapper[4681]: I0404 02:21:47.590381 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.109:5671: connect: connection refused"
Apr 04 02:21:47 crc kubenswrapper[4681]: I0404 02:21:47.874762 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.110:5671: connect: connection refused"
Apr 04 02:21:48 crc kubenswrapper[4681]: I0404 02:21:48.172668 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/notifications-rabbitmq-server-0" podUID="189dfe5e-4211-48c8-bc76-ea9c229c5d65" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.111:5671: connect: connection refused"
Apr 04 02:21:49 crc kubenswrapper[4681]: I0404 02:21:49.614404 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Apr 04 02:21:49 crc kubenswrapper[4681]: I0404 02:21:49.614749 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Apr 04 02:21:49 crc kubenswrapper[4681]: I0404 02:21:49.983867 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:49 crc kubenswrapper[4681]: I0404 02:21:49.986120 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:50 crc kubenswrapper[4681]: I0404 02:21:50.319741 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gz57l" event={"ID":"6d76298d-bafc-4c57-9e19-f77f982a3187","Type":"ContainerStarted","Data":"8b657f56aed55d458125c0685dce962a0fe1cba2b5c650d6b32480862c02fff0"}
Apr 04 02:21:50 crc kubenswrapper[4681]: I0404 02:21:50.320480 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:50 crc kubenswrapper[4681]: I0404 02:21:50.355163 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gz57l" podStartSLOduration=2.614162198 podStartE2EDuration="55.35514083s" podCreationTimestamp="2026-04-04 02:20:55 +0000 UTC" firstStartedPulling="2026-04-04 02:20:56.507868982 +0000 UTC m=+1536.173644102" lastFinishedPulling="2026-04-04 02:21:49.248847574 +0000 UTC m=+1588.914622734" observedRunningTime="2026-04-04 02:21:50.348939221 +0000 UTC m=+1590.014714361" watchObservedRunningTime="2026-04-04 02:21:50.35514083 +0000 UTC m=+1590.020915950"
Apr 04 02:21:51 crc kubenswrapper[4681]: I0404 02:21:51.690211 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Apr 04 02:21:52 crc kubenswrapper[4681]: I0404 02:21:52.931896 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Apr 04 02:21:52 crc kubenswrapper[4681]: I0404 02:21:52.933001 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="thanos-sidecar" containerID="cri-o://08559eaf4bf17efe29b5473556b8bda99d480fa11f9ecf1103240dcba8545453" gracePeriod=600
Apr 04 02:21:52 crc kubenswrapper[4681]: I0404 02:21:52.933185 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="config-reloader" containerID="cri-o://d0239e68aa867b7ab006964518ad4ba1941d3dc34f428aac653fd98e94ec8c99" gracePeriod=600
Apr 04 02:21:52 crc kubenswrapper[4681]: I0404 02:21:52.932579 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="prometheus" containerID="cri-o://51f88ab3729b37f6e0bb849dffa5692aa33e222efd7a785730e1f01122f26b8a" gracePeriod=600
Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.343890 4681 generic.go:334] "Generic (PLEG): container finished" podID="6579b16f-f45a-4c22-9107-6763d001efb2" containerID="08559eaf4bf17efe29b5473556b8bda99d480fa11f9ecf1103240dcba8545453" exitCode=0
Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.344141 4681 generic.go:334] "Generic (PLEG): container finished" podID="6579b16f-f45a-4c22-9107-6763d001efb2" containerID="d0239e68aa867b7ab006964518ad4ba1941d3dc34f428aac653fd98e94ec8c99" exitCode=0
Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.344149 4681 generic.go:334] "Generic (PLEG): container finished" podID="6579b16f-f45a-4c22-9107-6763d001efb2" containerID="51f88ab3729b37f6e0bb849dffa5692aa33e222efd7a785730e1f01122f26b8a" exitCode=0
Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.343961 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0"
event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerDied","Data":"08559eaf4bf17efe29b5473556b8bda99d480fa11f9ecf1103240dcba8545453"} Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.344183 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerDied","Data":"d0239e68aa867b7ab006964518ad4ba1941d3dc34f428aac653fd98e94ec8c99"} Apr 04 02:21:53 crc kubenswrapper[4681]: I0404 02:21:53.344197 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerDied","Data":"51f88ab3729b37f6e0bb849dffa5692aa33e222efd7a785730e1f01122f26b8a"} Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.154856 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278344 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278707 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: 
\"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278795 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278820 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278857 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2prl7\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278892 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278921 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278953 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.278979 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2\") pod \"6579b16f-f45a-4c22-9107-6763d001efb2\" (UID: \"6579b16f-f45a-4c22-9107-6763d001efb2\") " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.280131 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.280969 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.284499 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.284749 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.287203 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7" (OuterVolumeSpecName: "kube-api-access-2prl7") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "kube-api-access-2prl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.287323 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out" (OuterVolumeSpecName: "config-out") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.289320 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config" (OuterVolumeSpecName: "config") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.321612 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "pvc-be343324-7666-479e-a8ae-26270ab2cfcc". PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.322208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.327504 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config" (OuterVolumeSpecName: "web-config") pod "6579b16f-f45a-4c22-9107-6763d001efb2" (UID: "6579b16f-f45a-4c22-9107-6763d001efb2"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.356927 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6579b16f-f45a-4c22-9107-6763d001efb2","Type":"ContainerDied","Data":"8d35abc49d7dafa0f07907d8aae7bbc1ea76b82aa232d392eef31205979adb2c"} Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.356993 4681 scope.go:117] "RemoveContainer" containerID="08559eaf4bf17efe29b5473556b8bda99d480fa11f9ecf1103240dcba8545453" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.357007 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380700 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380743 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380758 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380773 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6579b16f-f45a-4c22-9107-6763d001efb2-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380815 4681 
reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") on node \"crc\" " Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380830 4681 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-tls-assets\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380842 4681 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6579b16f-f45a-4c22-9107-6763d001efb2-config-out\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380852 4681 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-web-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380863 4681 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6579b16f-f45a-4c22-9107-6763d001efb2-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.380876 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2prl7\" (UniqueName: \"kubernetes.io/projected/6579b16f-f45a-4c22-9107-6763d001efb2-kube-api-access-2prl7\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.401066 4681 scope.go:117] "RemoveContainer" containerID="d0239e68aa867b7ab006964518ad4ba1941d3dc34f428aac653fd98e94ec8c99" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.415448 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:21:54 crc kubenswrapper[4681]: 
I0404 02:21:54.424192 4681 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.424417 4681 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-be343324-7666-479e-a8ae-26270ab2cfcc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc") on node "crc" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.426713 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.430244 4681 scope.go:117] "RemoveContainer" containerID="51f88ab3729b37f6e0bb849dffa5692aa33e222efd7a785730e1f01122f26b8a" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.459792 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:21:54 crc kubenswrapper[4681]: E0404 02:21:54.460444 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="config-reloader" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460464 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="config-reloader" Apr 04 02:21:54 crc kubenswrapper[4681]: E0404 02:21:54.460476 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d3c9698-a36e-4606-8c79-e12920f642f3" containerName="ovn-config" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460482 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d3c9698-a36e-4606-8c79-e12920f642f3" containerName="ovn-config" Apr 04 02:21:54 crc kubenswrapper[4681]: E0404 02:21:54.460494 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="prometheus" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 
02:21:54.460500 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="prometheus" Apr 04 02:21:54 crc kubenswrapper[4681]: E0404 02:21:54.460516 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="thanos-sidecar" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460522 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="thanos-sidecar" Apr 04 02:21:54 crc kubenswrapper[4681]: E0404 02:21:54.460533 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="init-config-reloader" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460539 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="init-config-reloader" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460717 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="thanos-sidecar" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460727 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="prometheus" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460737 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d3c9698-a36e-4606-8c79-e12920f642f3" containerName="ovn-config" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.460746 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" containerName="config-reloader" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.462619 4681 scope.go:117] "RemoveContainer" containerID="1cc25dad350a8aaa54a6f494b053fa55a5f1f7e1c9bf98c525234211428b87b9" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.463105 4681 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.466737 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.467068 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xgv8d" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.467079 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.467311 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.467548 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.467756 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.469647 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.469850 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.482416 4681 reconciler_common.go:293] "Volume detached for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") on node \"crc\" DevicePath \"\"" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.493858 4681 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.502615 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584497 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584567 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584599 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584742 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: 
\"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584810 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584838 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.584956 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.585018 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.585069 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.585119 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.585205 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9qc2\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.586028 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.586254 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0" Apr 04 
02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.687863 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.687920 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.687958 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.687981 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.687996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688023 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688042 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688066 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688091 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688116 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9qc2\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688189 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688694 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.688907 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.692697 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.692833 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.693037 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.693473 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.693988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.694718 4681 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.694752 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1afbb87d4ef2fe230ee8a94c40d1d069f8d7a05e7e9d3bfdb3b9deafd206a254/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.695499 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.696046 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.696307 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.697977 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.719673 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9qc2\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.729718 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:54 crc kubenswrapper[4681]: I0404 02:21:54.831505 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Apr 04 02:21:55 crc kubenswrapper[4681]: I0404 02:21:55.221875 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6579b16f-f45a-4c22-9107-6763d001efb2" path="/var/lib/kubelet/pods/6579b16f-f45a-4c22-9107-6763d001efb2/volumes"
Apr 04 02:21:55 crc kubenswrapper[4681]: I0404 02:21:55.361610 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Apr 04 02:21:55 crc kubenswrapper[4681]: W0404 02:21:55.362105 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03a92323_ceb1_4b90_b706_b0d9f924bdd8.slice/crio-0646b11d8a5b8040ce3dcefc0cbaf12a765e18ad6a66decb6b3f11500db5537b WatchSource:0}: Error finding container 0646b11d8a5b8040ce3dcefc0cbaf12a765e18ad6a66decb6b3f11500db5537b: Status 404 returned error can't find the container with id 0646b11d8a5b8040ce3dcefc0cbaf12a765e18ad6a66decb6b3f11500db5537b
Apr 04 02:21:56 crc kubenswrapper[4681]: I0404 02:21:56.400381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerStarted","Data":"0646b11d8a5b8040ce3dcefc0cbaf12a765e18ad6a66decb6b3f11500db5537b"}
Apr 04 02:21:57 crc kubenswrapper[4681]: I0404 02:21:57.590323 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Apr 04 02:21:57 crc kubenswrapper[4681]: I0404 02:21:57.877477 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Apr 04 02:21:58 crc kubenswrapper[4681]: I0404 02:21:58.019539 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Apr 04 02:21:58 crc kubenswrapper[4681]: I0404 02:21:58.176229 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/notifications-rabbitmq-server-0"
Apr 04 02:21:58 crc kubenswrapper[4681]: I0404 02:21:58.186428 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Apr 04 02:21:58 crc kubenswrapper[4681]: I0404 02:21:58.420339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerStarted","Data":"cf130503707aebac706fc013898e06a0fddbf3e63baa8daccf410f2a5cef86d1"}
Apr 04 02:21:59 crc kubenswrapper[4681]: I0404 02:21:59.071872 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0"
Apr 04 02:21:59 crc kubenswrapper[4681]: E0404 02:21:59.072119 4681 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Apr 04 02:21:59 crc kubenswrapper[4681]: E0404 02:21:59.072312 4681 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Apr 04 02:21:59 crc kubenswrapper[4681]: E0404 02:21:59.072378 4681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift podName:cdc00a76-b945-4eca-98d7-1f126a78785f nodeName:}" failed. No retries permitted until 2026-04-04 02:23:03.072356195 +0000 UTC m=+1662.738131315 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift") pod "swift-storage-0" (UID: "cdc00a76-b945-4eca-98d7-1f126a78785f") : configmap "swift-ring-files" not found
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.138728 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587822-jdx8j"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.140886 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587822-jdx8j"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.142666 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.143290 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.143613 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.150403 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587822-jdx8j"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.189532 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czcnz\" (UniqueName: \"kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz\") pod \"auto-csr-approver-29587822-jdx8j\" (UID: \"a536c680-7c89-488e-befb-087242236628\") " pod="openshift-infra/auto-csr-approver-29587822-jdx8j"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.291396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czcnz\" (UniqueName: \"kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz\") pod \"auto-csr-approver-29587822-jdx8j\" (UID: \"a536c680-7c89-488e-befb-087242236628\") " pod="openshift-infra/auto-csr-approver-29587822-jdx8j"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.323688 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czcnz\" (UniqueName: \"kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz\") pod \"auto-csr-approver-29587822-jdx8j\" (UID: \"a536c680-7c89-488e-befb-087242236628\") " pod="openshift-infra/auto-csr-approver-29587822-jdx8j"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.437758 4681 generic.go:334] "Generic (PLEG): container finished" podID="6d76298d-bafc-4c57-9e19-f77f982a3187" containerID="8b657f56aed55d458125c0685dce962a0fe1cba2b5c650d6b32480862c02fff0" exitCode=0
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.437801 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gz57l" event={"ID":"6d76298d-bafc-4c57-9e19-f77f982a3187","Type":"ContainerDied","Data":"8b657f56aed55d458125c0685dce962a0fe1cba2b5c650d6b32480862c02fff0"}
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.462860 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587822-jdx8j"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.633483 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-5ngdh"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.634847 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.642716 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-fd52-account-create-update-g868v"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.648461 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.652843 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd52-account-create-update-g868v"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.653963 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.661999 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5ngdh"]
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.699670 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf452\" (UniqueName: \"kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.699744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.699783 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.699807 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d9br\" (UniqueName: \"kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.800562 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf452\" (UniqueName: \"kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.800642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.800683 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.800709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d9br\" (UniqueName: \"kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.801660 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.801671 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.818548 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d9br\" (UniqueName: \"kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br\") pod \"neutron-fd52-account-create-update-g868v\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.824039 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf452\" (UniqueName: \"kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452\") pod \"neutron-db-create-5ngdh\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.959686 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5ngdh"
Apr 04 02:22:00 crc kubenswrapper[4681]: I0404 02:22:00.975957 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd52-account-create-update-g868v"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.050133 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587822-jdx8j"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.328409 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-pwvnn"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.330151 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.354347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pwvnn"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.447478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587822-jdx8j" event={"ID":"a536c680-7c89-488e-befb-087242236628","Type":"ContainerStarted","Data":"e88bf97354211444cd4f4a3e5241748c39170f0fdd1849a2c8255c68bcdfefe5"}
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.469869 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b64f-account-create-update-rfm44"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.477730 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.491414 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.496004 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b64f-account-create-update-rfm44"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.524149 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dttw8\" (UniqueName: \"kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.524315 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.559750 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-5ngdh"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.628524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r78zd\" (UniqueName: \"kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.628923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dttw8\" (UniqueName: \"kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.629068 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.629116 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.630009 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.674406 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-fd52-account-create-update-g868v"]
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.694738 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dttw8\" (UniqueName: \"kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8\") pod \"glance-db-create-pwvnn\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.731077 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.731195 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r78zd\" (UniqueName: \"kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.732603 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.765039 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r78zd\" (UniqueName: \"kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd\") pod \"glance-b64f-account-create-update-rfm44\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.825856 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b64f-account-create-update-rfm44"
Apr 04 02:22:01 crc kubenswrapper[4681]: I0404 02:22:01.982860 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pwvnn"
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.032993 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gz57l"
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140200 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140243 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140343 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxnnh\" (UniqueName: \"kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140447 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140487 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.140633 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf\") pod \"6d76298d-bafc-4c57-9e19-f77f982a3187\" (UID: \"6d76298d-bafc-4c57-9e19-f77f982a3187\") "
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.143639 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.143863 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.148912 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh" (OuterVolumeSpecName: "kube-api-access-xxnnh") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "kube-api-access-xxnnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.150896 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.170849 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.180537 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.193684 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts" (OuterVolumeSpecName: "scripts") pod "6d76298d-bafc-4c57-9e19-f77f982a3187" (UID: "6d76298d-bafc-4c57-9e19-f77f982a3187"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243838 4681 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-swiftconf\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243875 4681 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-dispersionconf\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243888 4681 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6d76298d-bafc-4c57-9e19-f77f982a3187-etc-swift\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243899 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243910 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxnnh\" (UniqueName: \"kubernetes.io/projected/6d76298d-bafc-4c57-9e19-f77f982a3187-kube-api-access-xxnnh\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243921 4681 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6d76298d-bafc-4c57-9e19-f77f982a3187-ring-data-devices\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.243929 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d76298d-bafc-4c57-9e19-f77f982a3187-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.247256 4681
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-tfrts"] Apr 04 02:22:02 crc kubenswrapper[4681]: E0404 02:22:02.247627 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d76298d-bafc-4c57-9e19-f77f982a3187" containerName="swift-ring-rebalance" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.247642 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d76298d-bafc-4c57-9e19-f77f982a3187" containerName="swift-ring-rebalance" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.247814 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d76298d-bafc-4c57-9e19-f77f982a3187" containerName="swift-ring-rebalance" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.248370 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.254364 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tfrts"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.345690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.345797 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skq7k\" (UniqueName: \"kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.347421 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-5bc0-account-create-update-hdl7t"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.348604 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.353526 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.356656 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-pwvnn"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.409609 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bc0-account-create-update-hdl7t"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.447968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrgr5\" (UniqueName: \"kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.448074 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skq7k\" (UniqueName: \"kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.448133 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " 
pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.448179 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.448797 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.458035 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b64f-account-create-update-rfm44"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.466465 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skq7k\" (UniqueName: \"kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k\") pod \"keystone-db-create-tfrts\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.468908 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b64f-account-create-update-rfm44" event={"ID":"747c7dee-388d-4dc0-8a14-12c94c004057","Type":"ContainerStarted","Data":"0abf69a3e1c93b3e411eaaab84d2456580e01410e5f140a012343f7fe552e60e"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.471218 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gz57l" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.471245 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gz57l" event={"ID":"6d76298d-bafc-4c57-9e19-f77f982a3187","Type":"ContainerDied","Data":"36d93a56a995f2fe5aa7fd06bdc95c73d4e365c052eacf2fa4c16f7ad775d0d7"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.471421 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36d93a56a995f2fe5aa7fd06bdc95c73d4e365c052eacf2fa4c16f7ad775d0d7" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.475503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5ngdh" event={"ID":"d7040185-eeba-423a-b853-8b0845725ca7","Type":"ContainerStarted","Data":"ebc1366d022faa787b55cb6ca943489edc330ab3c212b0e79399c09278bb276d"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.475692 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5ngdh" event={"ID":"d7040185-eeba-423a-b853-8b0845725ca7","Type":"ContainerStarted","Data":"ff574a5f05e84ef09455aac3cea02232ce81f9b204f55052d4a38fda2ad6e9a0"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.477130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pwvnn" event={"ID":"91c9c400-d63b-4f2a-82fe-e178b9d8041d","Type":"ContainerStarted","Data":"2423af33d508ffac2af51d707e9b0aad4e5b430f040678262fa2ad136222e022"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.478214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd52-account-create-update-g868v" event={"ID":"d76e7add-8e4a-430f-ac78-55dd1539cb37","Type":"ContainerStarted","Data":"5346988cf8bf7d74c80ef294bc658ae4240719dd9b922c34d2640e000b5ba2ab"} Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.534482 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/placement-db-create-2vj6j"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.535994 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.548852 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4d4e-account-create-update-vwb26"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.550866 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.551083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrgr5\" (UniqueName: \"kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.552471 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.556433 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.561892 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vj6j"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.563623 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.569964 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4d4e-account-create-update-vwb26"] Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.575098 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrgr5\" (UniqueName: \"kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5\") pod \"keystone-5bc0-account-create-update-hdl7t\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.601082 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.612127 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.659221 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9gzz\" (UniqueName: \"kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.663119 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.663310 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpcj4\" (UniqueName: \"kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.663338 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.768134 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpcj4\" (UniqueName: 
\"kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.768207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.769947 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.775678 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9gzz\" (UniqueName: \"kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.775767 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.776630 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.786544 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpcj4\" (UniqueName: \"kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4\") pod \"placement-4d4e-account-create-update-vwb26\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.793224 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9gzz\" (UniqueName: \"kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz\") pod \"placement-db-create-2vj6j\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.940588 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:02 crc kubenswrapper[4681]: I0404 02:22:02.959870 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.095432 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-tfrts"] Apr 04 02:22:03 crc kubenswrapper[4681]: W0404 02:22:03.118514 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc5f3138_bfae_4200_9bff_80e1ceae2086.slice/crio-f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d WatchSource:0}: Error finding container f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d: Status 404 returned error can't find the container with id f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.164167 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5bc0-account-create-update-hdl7t"] Apr 04 02:22:03 crc kubenswrapper[4681]: W0404 02:22:03.389138 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod658ab0b4_3080_4229_bda8_98cdaeedd719.slice/crio-d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237 WatchSource:0}: Error finding container d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237: Status 404 returned error can't find the container with id d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237 Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.392547 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-2vj6j"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.448502 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4d4e-account-create-update-vwb26"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.488379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-fd52-account-create-update-g868v" event={"ID":"d76e7add-8e4a-430f-ac78-55dd1539cb37","Type":"ContainerStarted","Data":"5bf421c1163064b0b3abe2737121078d91b5601bc6203f33eb5f9654145e0ced"} Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.489216 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4d4e-account-create-update-vwb26" event={"ID":"0b80e6a4-dd65-4faa-8163-342276cd3481","Type":"ContainerStarted","Data":"988226a483f19c601be1f54361eac6229368a45b0312124d51aeb5ed1a41ba17"} Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.495463 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vj6j" event={"ID":"658ab0b4-3080-4229-bda8-98cdaeedd719","Type":"ContainerStarted","Data":"d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237"} Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.497214 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bc0-account-create-update-hdl7t" event={"ID":"bf1fdc7f-09be-4dd6-8b31-ff80353025e3","Type":"ContainerStarted","Data":"6d5d63a8fcb19c4c93a92d6f3aa97dabd22623402ee43b3d6ea40d2770a6a065"} Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.498619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tfrts" event={"ID":"dc5f3138-bfae-4200-9bff-80e1ceae2086","Type":"ContainerStarted","Data":"f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d"} Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.759127 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-lnswn"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.760740 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.777523 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-lnswn"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.871802 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-ce30-account-create-update-6wsbx"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.872976 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.874825 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.890508 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-ce30-account-create-update-6wsbx"] Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.894028 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg8gx\" (UniqueName: \"kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.894088 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.996552 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.996659 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bg8gx\" (UniqueName: \"kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.996699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.996740 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlc8f\" (UniqueName: \"kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:03 crc kubenswrapper[4681]: I0404 02:22:03.998460 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.017669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bg8gx\" 
(UniqueName: \"kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx\") pod \"watcher-db-create-lnswn\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.075856 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.097891 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.098026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlc8f\" (UniqueName: \"kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.098648 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.119226 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlc8f\" (UniqueName: \"kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f\") pod \"watcher-ce30-account-create-update-6wsbx\" (UID: 
\"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.189014 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.524246 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-5ngdh" podStartSLOduration=4.5242279960000005 podStartE2EDuration="4.524227996s" podCreationTimestamp="2026-04-04 02:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:04.523308231 +0000 UTC m=+1604.189083351" watchObservedRunningTime="2026-04-04 02:22:04.524227996 +0000 UTC m=+1604.190003116" Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.765747 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-lnswn"] Apr 04 02:22:04 crc kubenswrapper[4681]: I0404 02:22:04.808347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-ce30-account-create-update-6wsbx"] Apr 04 02:22:05 crc kubenswrapper[4681]: I0404 02:22:05.521805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b64f-account-create-update-rfm44" event={"ID":"747c7dee-388d-4dc0-8a14-12c94c004057","Type":"ContainerStarted","Data":"8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284"} Apr 04 02:22:05 crc kubenswrapper[4681]: I0404 02:22:05.523248 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ce30-account-create-update-6wsbx" event={"ID":"89925da5-3840-4ec1-9bbb-1f518d3381b9","Type":"ContainerStarted","Data":"99d5616410e7e4c48b727ad23e71f4a98cbac3926c831aeb25431b7723549896"} Apr 04 02:22:05 crc kubenswrapper[4681]: I0404 02:22:05.524551 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-db-create-lnswn" event={"ID":"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa","Type":"ContainerStarted","Data":"009a5b342e2e707d9b719cc420564042e9a2823cc979398b59ff690ff7e89a76"} Apr 04 02:22:05 crc kubenswrapper[4681]: I0404 02:22:05.525984 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pwvnn" event={"ID":"91c9c400-d63b-4f2a-82fe-e178b9d8041d","Type":"ContainerStarted","Data":"21af8f396dcbc6ca76360326bf1de1bf03c9709327dce1353dafb881a6d9abaa"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.542082 4681 generic.go:334] "Generic (PLEG): container finished" podID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerID="cf130503707aebac706fc013898e06a0fddbf3e63baa8daccf410f2a5cef86d1" exitCode=0 Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.542183 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerDied","Data":"cf130503707aebac706fc013898e06a0fddbf3e63baa8daccf410f2a5cef86d1"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.544448 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4d4e-account-create-update-vwb26" event={"ID":"0b80e6a4-dd65-4faa-8163-342276cd3481","Type":"ContainerStarted","Data":"10d0ffbcb1100858bab42542e23f39fb1b56d55c49a433c8c7c0a33121827542"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.547983 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vj6j" event={"ID":"658ab0b4-3080-4229-bda8-98cdaeedd719","Type":"ContainerStarted","Data":"0d2683cc806fef9b17226200f3ba10d873e4c34436ebcca3867b93dfab439352"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.558904 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bc0-account-create-update-hdl7t" 
event={"ID":"bf1fdc7f-09be-4dd6-8b31-ff80353025e3","Type":"ContainerStarted","Data":"7e4c9137d135b2fcb0f177a4f676ad1683c4862f5f34ce907a485533a6cabf04"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.562919 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ce30-account-create-update-6wsbx" event={"ID":"89925da5-3840-4ec1-9bbb-1f518d3381b9","Type":"ContainerStarted","Data":"c53d88edb80844cd47a7c826429c345ad1734c4067ea19cf431bddaa3cf78c88"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.564871 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lnswn" event={"ID":"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa","Type":"ContainerStarted","Data":"7d17ec41e4b267bce3a20031accb704f09acaf9e6a0a2f9431a94dd6889ed48d"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.570780 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tfrts" event={"ID":"dc5f3138-bfae-4200-9bff-80e1ceae2086","Type":"ContainerStarted","Data":"015e6476b6f4adbaa77e32abbcceff10eb2bb8d539ab9c4cd2759e3b907df6de"} Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.610057 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5bc0-account-create-update-hdl7t" podStartSLOduration=4.610033514 podStartE2EDuration="4.610033514s" podCreationTimestamp="2026-04-04 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.588815903 +0000 UTC m=+1606.254591023" watchObservedRunningTime="2026-04-04 02:22:06.610033514 +0000 UTC m=+1606.275808644" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.640519 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-2vj6j" podStartSLOduration=4.640498939 podStartE2EDuration="4.640498939s" podCreationTimestamp="2026-04-04 02:22:02 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.618538557 +0000 UTC m=+1606.284313677" watchObservedRunningTime="2026-04-04 02:22:06.640498939 +0000 UTC m=+1606.306274059" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.642057 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-ce30-account-create-update-6wsbx" podStartSLOduration=3.642047242 podStartE2EDuration="3.642047242s" podCreationTimestamp="2026-04-04 02:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.639693488 +0000 UTC m=+1606.305468608" watchObservedRunningTime="2026-04-04 02:22:06.642047242 +0000 UTC m=+1606.307822362" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.655297 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-tfrts" podStartSLOduration=4.655278314 podStartE2EDuration="4.655278314s" podCreationTimestamp="2026-04-04 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.653010323 +0000 UTC m=+1606.318785443" watchObservedRunningTime="2026-04-04 02:22:06.655278314 +0000 UTC m=+1606.321053434" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.673494 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-create-lnswn" podStartSLOduration=3.673476644 podStartE2EDuration="3.673476644s" podCreationTimestamp="2026-04-04 02:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.667279114 +0000 UTC m=+1606.333054234" watchObservedRunningTime="2026-04-04 02:22:06.673476644 +0000 UTC m=+1606.339251764" Apr 04 02:22:06 
crc kubenswrapper[4681]: I0404 02:22:06.696628 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b64f-account-create-update-rfm44" podStartSLOduration=5.696609237 podStartE2EDuration="5.696609237s" podCreationTimestamp="2026-04-04 02:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.680842005 +0000 UTC m=+1606.346617115" watchObservedRunningTime="2026-04-04 02:22:06.696609237 +0000 UTC m=+1606.362384357" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.698776 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4d4e-account-create-update-vwb26" podStartSLOduration=4.698763707 podStartE2EDuration="4.698763707s" podCreationTimestamp="2026-04-04 02:22:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.696178786 +0000 UTC m=+1606.361953906" watchObservedRunningTime="2026-04-04 02:22:06.698763707 +0000 UTC m=+1606.364538837" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.723238 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-fd52-account-create-update-g868v" podStartSLOduration=6.723213517 podStartE2EDuration="6.723213517s" podCreationTimestamp="2026-04-04 02:22:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.710249551 +0000 UTC m=+1606.376024671" watchObservedRunningTime="2026-04-04 02:22:06.723213517 +0000 UTC m=+1606.388988637" Apr 04 02:22:06 crc kubenswrapper[4681]: I0404 02:22:06.728474 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-pwvnn" podStartSLOduration=5.72845432 podStartE2EDuration="5.72845432s" 
podCreationTimestamp="2026-04-04 02:22:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:06.72152202 +0000 UTC m=+1606.387297150" watchObservedRunningTime="2026-04-04 02:22:06.72845432 +0000 UTC m=+1606.394229440" Apr 04 02:22:07 crc kubenswrapper[4681]: I0404 02:22:07.592914 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerStarted","Data":"003583dad88aa5da1629220cea8700d686ed286cbe9b53127604904cd23c5237"} Apr 04 02:22:07 crc kubenswrapper[4681]: I0404 02:22:07.947993 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-zwrl6"] Apr 04 02:22:07 crc kubenswrapper[4681]: I0404 02:22:07.959920 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:07 crc kubenswrapper[4681]: I0404 02:22:07.960306 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwrl6"] Apr 04 02:22:07 crc kubenswrapper[4681]: I0404 02:22:07.970703 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.048922 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7lzkj"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.050721 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.059774 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7lzkj"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.075846 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsvpl\" (UniqueName: \"kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.076191 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.164586 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4dhqt"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.165931 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.174355 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-81c4-account-create-update-xh2xd"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.175437 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.176671 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.177674 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsvpl\" (UniqueName: \"kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.177738 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.177782 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnwg\" (UniqueName: \"kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.177831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.178683 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.203406 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsvpl\" (UniqueName: \"kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl\") pod \"root-account-create-update-zwrl6\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.216961 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dhqt"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.240720 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81c4-account-create-update-xh2xd"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279515 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnwg\" (UniqueName: \"kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279557 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbwzd\" (UniqueName: 
\"kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279612 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279673 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx5dz\" (UniqueName: \"kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.279722 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.280588 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3458-account-create-update-ldpbt"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.281712 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") 
" pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.281798 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.283401 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.288693 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.291654 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3458-account-create-update-ldpbt"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.298973 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnwg\" (UniqueName: \"kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg\") pod \"barbican-db-create-7lzkj\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.368056 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382239 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbwzd\" (UniqueName: \"kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382355 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382381 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snc8f\" (UniqueName: \"kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382466 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx5dz\" (UniqueName: 
\"kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.382490 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.383367 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.384504 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.404091 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbwzd\" (UniqueName: \"kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd\") pod \"barbican-81c4-account-create-update-xh2xd\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.407156 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx5dz\" 
(UniqueName: \"kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz\") pod \"cinder-db-create-4dhqt\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.484463 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.484528 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snc8f\" (UniqueName: \"kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.486193 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.509876 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snc8f\" (UniqueName: \"kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f\") pod \"cinder-3458-account-create-update-ldpbt\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.563719 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.575194 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.624574 4681 generic.go:334] "Generic (PLEG): container finished" podID="91c9c400-d63b-4f2a-82fe-e178b9d8041d" containerID="21af8f396dcbc6ca76360326bf1de1bf03c9709327dce1353dafb881a6d9abaa" exitCode=0 Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.624828 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pwvnn" event={"ID":"91c9c400-d63b-4f2a-82fe-e178b9d8041d","Type":"ContainerDied","Data":"21af8f396dcbc6ca76360326bf1de1bf03c9709327dce1353dafb881a6d9abaa"} Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.632543 4681 generic.go:334] "Generic (PLEG): container finished" podID="658ab0b4-3080-4229-bda8-98cdaeedd719" containerID="0d2683cc806fef9b17226200f3ba10d873e4c34436ebcca3867b93dfab439352" exitCode=0 Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.632649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vj6j" event={"ID":"658ab0b4-3080-4229-bda8-98cdaeedd719","Type":"ContainerDied","Data":"0d2683cc806fef9b17226200f3ba10d873e4c34436ebcca3867b93dfab439352"} Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.636903 4681 generic.go:334] "Generic (PLEG): container finished" podID="dc5f3138-bfae-4200-9bff-80e1ceae2086" containerID="015e6476b6f4adbaa77e32abbcceff10eb2bb8d539ab9c4cd2759e3b907df6de" exitCode=0 Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.636973 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tfrts" event={"ID":"dc5f3138-bfae-4200-9bff-80e1ceae2086","Type":"ContainerDied","Data":"015e6476b6f4adbaa77e32abbcceff10eb2bb8d539ab9c4cd2759e3b907df6de"} Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 
02:22:08.644540 4681 generic.go:334] "Generic (PLEG): container finished" podID="a536c680-7c89-488e-befb-087242236628" containerID="c7e1b6a03e18aea1164db3877c25464aaa6d1293ab041182ec06680694ceba31" exitCode=0 Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.644606 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587822-jdx8j" event={"ID":"a536c680-7c89-488e-befb-087242236628","Type":"ContainerDied","Data":"c7e1b6a03e18aea1164db3877c25464aaa6d1293ab041182ec06680694ceba31"} Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.653400 4681 generic.go:334] "Generic (PLEG): container finished" podID="d7040185-eeba-423a-b853-8b0845725ca7" containerID="ebc1366d022faa787b55cb6ca943489edc330ab3c212b0e79399c09278bb276d" exitCode=0 Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.653438 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5ngdh" event={"ID":"d7040185-eeba-423a-b853-8b0845725ca7","Type":"ContainerDied","Data":"ebc1366d022faa787b55cb6ca943489edc330ab3c212b0e79399c09278bb276d"} Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.770049 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-zwrl6"] Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.776810 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:08 crc kubenswrapper[4681]: I0404 02:22:08.873664 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7lzkj"] Apr 04 02:22:09 crc kubenswrapper[4681]: W0404 02:22:09.150197 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc378514c_b92c_4cd6_83a0_c1ac658b6e9b.slice/crio-5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600 WatchSource:0}: Error finding container 5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600: Status 404 returned error can't find the container with id 5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600 Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.153827 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4dhqt"] Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.160472 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-81c4-account-create-update-xh2xd"] Apr 04 02:22:09 crc kubenswrapper[4681]: W0404 02:22:09.199903 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd042e61e_59c3_408a_a5c1_95f6f8f52c21.slice/crio-fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3 WatchSource:0}: Error finding container fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3: Status 404 returned error can't find the container with id fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3 Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.275797 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3458-account-create-update-ldpbt"] Apr 04 02:22:09 crc kubenswrapper[4681]: W0404 02:22:09.400024 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cffa15_91ef_48fe_bd03_46cf3e2b4b9c.slice/crio-86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea WatchSource:0}: Error finding container 86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea: Status 404 returned error can't find the container with id 86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.664339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81c4-account-create-update-xh2xd" event={"ID":"d042e61e-59c3-408a-a5c1-95f6f8f52c21","Type":"ContainerStarted","Data":"fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3"} Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.666996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7lzkj" event={"ID":"78c6ae52-4069-4291-bf5b-2a3567e923d0","Type":"ContainerStarted","Data":"7bb0d006149d9edb94bfcdeac169a69bc67c0467a1fb0c895279a936df7b7602"} Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.668597 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrl6" event={"ID":"58a8ceed-9c8d-4f64-a21a-8868d39acb26","Type":"ContainerStarted","Data":"ce1272eed317518e298d1f40b397c2303c494d5c27259cffa9b216f792332cd3"} Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.669911 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dhqt" event={"ID":"c378514c-b92c-4cd6-83a0-c1ac658b6e9b","Type":"ContainerStarted","Data":"5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600"} Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.671050 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3458-account-create-update-ldpbt" 
event={"ID":"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c","Type":"ContainerStarted","Data":"86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea"} Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.672643 4681 generic.go:334] "Generic (PLEG): container finished" podID="f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" containerID="7d17ec41e4b267bce3a20031accb704f09acaf9e6a0a2f9431a94dd6889ed48d" exitCode=0 Apr 04 02:22:09 crc kubenswrapper[4681]: I0404 02:22:09.672725 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lnswn" event={"ID":"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa","Type":"ContainerDied","Data":"7d17ec41e4b267bce3a20031accb704f09acaf9e6a0a2f9431a94dd6889ed48d"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.359055 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.371524 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.388052 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587822-jdx8j" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.410781 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-pwvnn" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.413476 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-5ngdh" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.431802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9gzz\" (UniqueName: \"kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz\") pod \"658ab0b4-3080-4229-bda8-98cdaeedd719\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.431922 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts\") pod \"658ab0b4-3080-4229-bda8-98cdaeedd719\" (UID: \"658ab0b4-3080-4229-bda8-98cdaeedd719\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.433465 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "658ab0b4-3080-4229-bda8-98cdaeedd719" (UID: "658ab0b4-3080-4229-bda8-98cdaeedd719"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.447520 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz" (OuterVolumeSpecName: "kube-api-access-j9gzz") pod "658ab0b4-3080-4229-bda8-98cdaeedd719" (UID: "658ab0b4-3080-4229-bda8-98cdaeedd719"). InnerVolumeSpecName "kube-api-access-j9gzz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534088 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts\") pod \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534149 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dttw8\" (UniqueName: \"kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8\") pod \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\" (UID: \"91c9c400-d63b-4f2a-82fe-e178b9d8041d\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534216 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts\") pod \"d7040185-eeba-423a-b853-8b0845725ca7\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534289 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czcnz\" (UniqueName: \"kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz\") pod \"a536c680-7c89-488e-befb-087242236628\" (UID: \"a536c680-7c89-488e-befb-087242236628\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534324 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf452\" (UniqueName: \"kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452\") pod \"d7040185-eeba-423a-b853-8b0845725ca7\" (UID: \"d7040185-eeba-423a-b853-8b0845725ca7\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534401 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-skq7k\" (UniqueName: \"kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k\") pod \"dc5f3138-bfae-4200-9bff-80e1ceae2086\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "91c9c400-d63b-4f2a-82fe-e178b9d8041d" (UID: "91c9c400-d63b-4f2a-82fe-e178b9d8041d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534524 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts\") pod \"dc5f3138-bfae-4200-9bff-80e1ceae2086\" (UID: \"dc5f3138-bfae-4200-9bff-80e1ceae2086\") " Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7040185-eeba-423a-b853-8b0845725ca7" (UID: "d7040185-eeba-423a-b853-8b0845725ca7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.534947 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc5f3138-bfae-4200-9bff-80e1ceae2086" (UID: "dc5f3138-bfae-4200-9bff-80e1ceae2086"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.535186 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc5f3138-bfae-4200-9bff-80e1ceae2086-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.535202 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/91c9c400-d63b-4f2a-82fe-e178b9d8041d-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.535211 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9gzz\" (UniqueName: \"kubernetes.io/projected/658ab0b4-3080-4229-bda8-98cdaeedd719-kube-api-access-j9gzz\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.535223 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7040185-eeba-423a-b853-8b0845725ca7-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.535231 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/658ab0b4-3080-4229-bda8-98cdaeedd719-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.540432 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k" (OuterVolumeSpecName: "kube-api-access-skq7k") pod "dc5f3138-bfae-4200-9bff-80e1ceae2086" (UID: "dc5f3138-bfae-4200-9bff-80e1ceae2086"). InnerVolumeSpecName "kube-api-access-skq7k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.540532 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz" (OuterVolumeSpecName: "kube-api-access-czcnz") pod "a536c680-7c89-488e-befb-087242236628" (UID: "a536c680-7c89-488e-befb-087242236628"). InnerVolumeSpecName "kube-api-access-czcnz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.540829 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8" (OuterVolumeSpecName: "kube-api-access-dttw8") pod "91c9c400-d63b-4f2a-82fe-e178b9d8041d" (UID: "91c9c400-d63b-4f2a-82fe-e178b9d8041d"). InnerVolumeSpecName "kube-api-access-dttw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.546854 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452" (OuterVolumeSpecName: "kube-api-access-sf452") pod "d7040185-eeba-423a-b853-8b0845725ca7" (UID: "d7040185-eeba-423a-b853-8b0845725ca7"). InnerVolumeSpecName "kube-api-access-sf452". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.636547 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dttw8\" (UniqueName: \"kubernetes.io/projected/91c9c400-d63b-4f2a-82fe-e178b9d8041d-kube-api-access-dttw8\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.636820 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czcnz\" (UniqueName: \"kubernetes.io/projected/a536c680-7c89-488e-befb-087242236628-kube-api-access-czcnz\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.636829 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf452\" (UniqueName: \"kubernetes.io/projected/d7040185-eeba-423a-b853-8b0845725ca7-kube-api-access-sf452\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.636838 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skq7k\" (UniqueName: \"kubernetes.io/projected/dc5f3138-bfae-4200-9bff-80e1ceae2086-kube-api-access-skq7k\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.691188 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7lzkj" event={"ID":"78c6ae52-4069-4291-bf5b-2a3567e923d0","Type":"ContainerStarted","Data":"4dcb735305ffd307ec46ac893d47f664e53672c58324610bf11041ba59e76b4e"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.694579 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrl6" event={"ID":"58a8ceed-9c8d-4f64-a21a-8868d39acb26","Type":"ContainerStarted","Data":"3f7e807ead5438442082ebc11cfbfe063147e5777f588d021a586b1b4cdae775"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.698813 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587822-jdx8j" 
event={"ID":"a536c680-7c89-488e-befb-087242236628","Type":"ContainerDied","Data":"e88bf97354211444cd4f4a3e5241748c39170f0fdd1849a2c8255c68bcdfefe5"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.698851 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e88bf97354211444cd4f4a3e5241748c39170f0fdd1849a2c8255c68bcdfefe5" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.698912 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587822-jdx8j" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.700898 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-5ngdh" event={"ID":"d7040185-eeba-423a-b853-8b0845725ca7","Type":"ContainerDied","Data":"ff574a5f05e84ef09455aac3cea02232ce81f9b204f55052d4a38fda2ad6e9a0"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.700939 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff574a5f05e84ef09455aac3cea02232ce81f9b204f55052d4a38fda2ad6e9a0" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.700989 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-5ngdh" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.711914 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-pwvnn" event={"ID":"91c9c400-d63b-4f2a-82fe-e178b9d8041d","Type":"ContainerDied","Data":"2423af33d508ffac2af51d707e9b0aad4e5b430f040678262fa2ad136222e022"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.711947 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2423af33d508ffac2af51d707e9b0aad4e5b430f040678262fa2ad136222e022" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.712007 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-pwvnn" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.721013 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-7lzkj" podStartSLOduration=2.720995018 podStartE2EDuration="2.720995018s" podCreationTimestamp="2026-04-04 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:10.705974366 +0000 UTC m=+1610.371749486" watchObservedRunningTime="2026-04-04 02:22:10.720995018 +0000 UTC m=+1610.386770138" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.726490 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-2vj6j" event={"ID":"658ab0b4-3080-4229-bda8-98cdaeedd719","Type":"ContainerDied","Data":"d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.726527 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84d1b36c443deee6a71f1fd1979d360ef2090288cc950fe1030553bf3953237" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.726579 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-2vj6j" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.734700 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81c4-account-create-update-xh2xd" event={"ID":"d042e61e-59c3-408a-a5c1-95f6f8f52c21","Type":"ContainerStarted","Data":"72bedd2aeed60c819f3ec3bb587a679907467c7231b1ce44fab871ce6c27f73c"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.749429 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-zwrl6" podStartSLOduration=3.749407927 podStartE2EDuration="3.749407927s" podCreationTimestamp="2026-04-04 02:22:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:10.734360194 +0000 UTC m=+1610.400135314" watchObservedRunningTime="2026-04-04 02:22:10.749407927 +0000 UTC m=+1610.415183047" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.751221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerStarted","Data":"1b95be46818a425f71dd352fae0232fb95f43768be84cde531e71eaff5f2625f"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.752910 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dhqt" event={"ID":"c378514c-b92c-4cd6-83a0-c1ac658b6e9b","Type":"ContainerStarted","Data":"13c21cee4646c0d73633253387281a7f3991e88ca65e2c6fe760a57c88cb50c6"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.754120 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3458-account-create-update-ldpbt" event={"ID":"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c","Type":"ContainerStarted","Data":"15b539cfe9d4581737836dbe14ea6deedb8a8fad8ff36f006ff507ffa4ac7136"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.756470 4681 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-tfrts" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.757669 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-tfrts" event={"ID":"dc5f3138-bfae-4200-9bff-80e1ceae2086","Type":"ContainerDied","Data":"f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d"} Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.757708 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f40f8caf668e5e015311c7d811b249ccdf50c547b3c5b45911a0266e8519b42d" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.767545 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-81c4-account-create-update-xh2xd" podStartSLOduration=2.767527723 podStartE2EDuration="2.767527723s" podCreationTimestamp="2026-04-04 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:10.762241768 +0000 UTC m=+1610.428016888" watchObservedRunningTime="2026-04-04 02:22:10.767527723 +0000 UTC m=+1610.433302843" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.787281 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-3458-account-create-update-ldpbt" podStartSLOduration=2.787241513 podStartE2EDuration="2.787241513s" podCreationTimestamp="2026-04-04 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:10.780812427 +0000 UTC m=+1610.446587547" watchObservedRunningTime="2026-04-04 02:22:10.787241513 +0000 UTC m=+1610.453016623" Apr 04 02:22:10 crc kubenswrapper[4681]: I0404 02:22:10.802609 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-4dhqt" 
podStartSLOduration=2.802589235 podStartE2EDuration="2.802589235s" podCreationTimestamp="2026-04-04 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:10.799579722 +0000 UTC m=+1610.465354842" watchObservedRunningTime="2026-04-04 02:22:10.802589235 +0000 UTC m=+1610.468364355" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.069256 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.155577 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts\") pod \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.155771 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bg8gx\" (UniqueName: \"kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx\") pod \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\" (UID: \"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa\") " Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.156299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" (UID: "f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.160114 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx" (OuterVolumeSpecName: "kube-api-access-bg8gx") pod "f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" (UID: "f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa"). InnerVolumeSpecName "kube-api-access-bg8gx". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.257897 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bg8gx\" (UniqueName: \"kubernetes.io/projected/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-kube-api-access-bg8gx\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.257939 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.475313 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587816-dlnrp"] Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.486495 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587816-dlnrp"] Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.767103 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-lnswn" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.767147 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-lnswn" event={"ID":"f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa","Type":"ContainerDied","Data":"009a5b342e2e707d9b719cc420564042e9a2823cc979398b59ff690ff7e89a76"} Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.767196 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="009a5b342e2e707d9b719cc420564042e9a2823cc979398b59ff690ff7e89a76" Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.771743 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerStarted","Data":"44e0064a035b6cb26bde2c70e3bf59c59b5f159533019c04f98c4f0c47779629"} Apr 04 02:22:11 crc kubenswrapper[4681]: I0404 02:22:11.814214 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.814192596 podStartE2EDuration="17.814192596s" podCreationTimestamp="2026-04-04 02:21:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:11.804545311 +0000 UTC m=+1611.470320431" watchObservedRunningTime="2026-04-04 02:22:11.814192596 +0000 UTC m=+1611.479967716" Apr 04 02:22:12 crc kubenswrapper[4681]: I0404 02:22:12.784209 4681 generic.go:334] "Generic (PLEG): container finished" podID="7de30d66-63ae-43ca-8d87-33b3fc14f4b2" containerID="36d56db957703fcbe4a0b7947518ae20c7387fc303d7734d7efd544002e6f079" exitCode=0 Apr 04 02:22:12 crc kubenswrapper[4681]: I0404 02:22:12.784326 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"7de30d66-63ae-43ca-8d87-33b3fc14f4b2","Type":"ContainerDied","Data":"36d56db957703fcbe4a0b7947518ae20c7387fc303d7734d7efd544002e6f079"} Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.215432 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21cbe3ab-4b9a-49b1-90c6-a86457d33b81" path="/var/lib/kubelet/pods/21cbe3ab-4b9a-49b1-90c6-a86457d33b81/volumes" Apr 04 02:22:13 crc kubenswrapper[4681]: E0404 02:22:13.241559 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747c7dee_388d_4dc0_8a14_12c94c004057.slice/crio-conmon-8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747c7dee_388d_4dc0_8a14_12c94c004057.slice/crio-8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.794342 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b80e6a4-dd65-4faa-8163-342276cd3481" containerID="10d0ffbcb1100858bab42542e23f39fb1b56d55c49a433c8c7c0a33121827542" exitCode=0 Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.794443 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4d4e-account-create-update-vwb26" event={"ID":"0b80e6a4-dd65-4faa-8163-342276cd3481","Type":"ContainerDied","Data":"10d0ffbcb1100858bab42542e23f39fb1b56d55c49a433c8c7c0a33121827542"} Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.797009 4681 generic.go:334] "Generic (PLEG): container finished" podID="747c7dee-388d-4dc0-8a14-12c94c004057" containerID="8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284" exitCode=0 Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.797043 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/glance-b64f-account-create-update-rfm44" event={"ID":"747c7dee-388d-4dc0-8a14-12c94c004057","Type":"ContainerDied","Data":"8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284"} Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.799085 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"7de30d66-63ae-43ca-8d87-33b3fc14f4b2","Type":"ContainerStarted","Data":"22b5ed2d0ec29cb7ffb8343546d24ac2c1fc3f4d4aa32ead2097af5553f76932"} Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.800457 4681 generic.go:334] "Generic (PLEG): container finished" podID="d76e7add-8e4a-430f-ac78-55dd1539cb37" containerID="5bf421c1163064b0b3abe2737121078d91b5601bc6203f33eb5f9654145e0ced" exitCode=0 Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.800497 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd52-account-create-update-g868v" event={"ID":"d76e7add-8e4a-430f-ac78-55dd1539cb37","Type":"ContainerDied","Data":"5bf421c1163064b0b3abe2737121078d91b5601bc6203f33eb5f9654145e0ced"} Apr 04 02:22:13 crc kubenswrapper[4681]: I0404 02:22:13.848305 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=-9223371902.006487 podStartE2EDuration="2m14.848289826s" podCreationTimestamp="2026-04-04 02:19:59 +0000 UTC" firstStartedPulling="2026-04-04 02:20:29.257769824 +0000 UTC m=+1508.923544944" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:22:13.840481882 +0000 UTC m=+1613.506257012" watchObservedRunningTime="2026-04-04 02:22:13.848289826 +0000 UTC m=+1613.514064946" Apr 04 02:22:14 crc kubenswrapper[4681]: I0404 02:22:14.816918 4681 generic.go:334] "Generic (PLEG): container finished" podID="bf1fdc7f-09be-4dd6-8b31-ff80353025e3" containerID="7e4c9137d135b2fcb0f177a4f676ad1683c4862f5f34ce907a485533a6cabf04" exitCode=0 Apr 04 02:22:14 crc 
kubenswrapper[4681]: I0404 02:22:14.817008 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bc0-account-create-update-hdl7t" event={"ID":"bf1fdc7f-09be-4dd6-8b31-ff80353025e3","Type":"ContainerDied","Data":"7e4c9137d135b2fcb0f177a4f676ad1683c4862f5f34ce907a485533a6cabf04"} Apr 04 02:22:14 crc kubenswrapper[4681]: I0404 02:22:14.831590 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.269374 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.275678 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-fd52-account-create-update-g868v" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.293424 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b64f-account-create-update-rfm44" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.338412 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts\") pod \"0b80e6a4-dd65-4faa-8163-342276cd3481\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.338477 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d9br\" (UniqueName: \"kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br\") pod \"d76e7add-8e4a-430f-ac78-55dd1539cb37\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.338537 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts\") pod \"d76e7add-8e4a-430f-ac78-55dd1539cb37\" (UID: \"d76e7add-8e4a-430f-ac78-55dd1539cb37\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.338567 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpcj4\" (UniqueName: \"kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4\") pod \"0b80e6a4-dd65-4faa-8163-342276cd3481\" (UID: \"0b80e6a4-dd65-4faa-8163-342276cd3481\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.339063 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b80e6a4-dd65-4faa-8163-342276cd3481" (UID: "0b80e6a4-dd65-4faa-8163-342276cd3481"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.339477 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b80e6a4-dd65-4faa-8163-342276cd3481-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.340513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d76e7add-8e4a-430f-ac78-55dd1539cb37" (UID: "d76e7add-8e4a-430f-ac78-55dd1539cb37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.344202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br" (OuterVolumeSpecName: "kube-api-access-4d9br") pod "d76e7add-8e4a-430f-ac78-55dd1539cb37" (UID: "d76e7add-8e4a-430f-ac78-55dd1539cb37"). InnerVolumeSpecName "kube-api-access-4d9br". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.346430 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4" (OuterVolumeSpecName: "kube-api-access-lpcj4") pod "0b80e6a4-dd65-4faa-8163-342276cd3481" (UID: "0b80e6a4-dd65-4faa-8163-342276cd3481"). InnerVolumeSpecName "kube-api-access-lpcj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.440658 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r78zd\" (UniqueName: \"kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd\") pod \"747c7dee-388d-4dc0-8a14-12c94c004057\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.440720 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts\") pod \"747c7dee-388d-4dc0-8a14-12c94c004057\" (UID: \"747c7dee-388d-4dc0-8a14-12c94c004057\") " Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.441168 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d9br\" (UniqueName: \"kubernetes.io/projected/d76e7add-8e4a-430f-ac78-55dd1539cb37-kube-api-access-4d9br\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.441191 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d76e7add-8e4a-430f-ac78-55dd1539cb37-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.441201 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpcj4\" (UniqueName: \"kubernetes.io/projected/0b80e6a4-dd65-4faa-8163-342276cd3481-kube-api-access-lpcj4\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.441621 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "747c7dee-388d-4dc0-8a14-12c94c004057" (UID: "747c7dee-388d-4dc0-8a14-12c94c004057"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.443373 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd" (OuterVolumeSpecName: "kube-api-access-r78zd") pod "747c7dee-388d-4dc0-8a14-12c94c004057" (UID: "747c7dee-388d-4dc0-8a14-12c94c004057"). InnerVolumeSpecName "kube-api-access-r78zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.543366 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r78zd\" (UniqueName: \"kubernetes.io/projected/747c7dee-388d-4dc0-8a14-12c94c004057-kube-api-access-r78zd\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.543408 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/747c7dee-388d-4dc0-8a14-12c94c004057-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.831565 4681 generic.go:334] "Generic (PLEG): container finished" podID="89925da5-3840-4ec1-9bbb-1f518d3381b9" containerID="c53d88edb80844cd47a7c826429c345ad1734c4067ea19cf431bddaa3cf78c88" exitCode=0 Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.831660 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ce30-account-create-update-6wsbx" event={"ID":"89925da5-3840-4ec1-9bbb-1f518d3381b9","Type":"ContainerDied","Data":"c53d88edb80844cd47a7c826429c345ad1734c4067ea19cf431bddaa3cf78c88"} Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.835384 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-fd52-account-create-update-g868v" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.835382 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-fd52-account-create-update-g868v" event={"ID":"d76e7add-8e4a-430f-ac78-55dd1539cb37","Type":"ContainerDied","Data":"5346988cf8bf7d74c80ef294bc658ae4240719dd9b922c34d2640e000b5ba2ab"} Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.835753 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5346988cf8bf7d74c80ef294bc658ae4240719dd9b922c34d2640e000b5ba2ab" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.837178 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4d4e-account-create-update-vwb26" event={"ID":"0b80e6a4-dd65-4faa-8163-342276cd3481","Type":"ContainerDied","Data":"988226a483f19c601be1f54361eac6229368a45b0312124d51aeb5ed1a41ba17"} Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.837214 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="988226a483f19c601be1f54361eac6229368a45b0312124d51aeb5ed1a41ba17" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.837303 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4d4e-account-create-update-vwb26" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.840384 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b64f-account-create-update-rfm44" Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.840824 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b64f-account-create-update-rfm44" event={"ID":"747c7dee-388d-4dc0-8a14-12c94c004057","Type":"ContainerDied","Data":"0abf69a3e1c93b3e411eaaab84d2456580e01410e5f140a012343f7fe552e60e"} Apr 04 02:22:15 crc kubenswrapper[4681]: I0404 02:22:15.840880 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0abf69a3e1c93b3e411eaaab84d2456580e01410e5f140a012343f7fe552e60e" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.076714 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.152793 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts\") pod \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.153170 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf1fdc7f-09be-4dd6-8b31-ff80353025e3" (UID: "bf1fdc7f-09be-4dd6-8b31-ff80353025e3"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.153437 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrgr5\" (UniqueName: \"kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5\") pod \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\" (UID: \"bf1fdc7f-09be-4dd6-8b31-ff80353025e3\") " Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.153932 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.157677 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5" (OuterVolumeSpecName: "kube-api-access-rrgr5") pod "bf1fdc7f-09be-4dd6-8b31-ff80353025e3" (UID: "bf1fdc7f-09be-4dd6-8b31-ff80353025e3"). InnerVolumeSpecName "kube-api-access-rrgr5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.256775 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrgr5\" (UniqueName: \"kubernetes.io/projected/bf1fdc7f-09be-4dd6-8b31-ff80353025e3-kube-api-access-rrgr5\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572174 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-9vwnm"] Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572529 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="658ab0b4-3080-4229-bda8-98cdaeedd719" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572547 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="658ab0b4-3080-4229-bda8-98cdaeedd719" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572563 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572568 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572585 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d76e7add-8e4a-430f-ac78-55dd1539cb37" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572591 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d76e7add-8e4a-430f-ac78-55dd1539cb37" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572601 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1fdc7f-09be-4dd6-8b31-ff80353025e3" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc 
kubenswrapper[4681]: I0404 02:22:16.572608 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1fdc7f-09be-4dd6-8b31-ff80353025e3" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572619 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5f3138-bfae-4200-9bff-80e1ceae2086" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572625 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5f3138-bfae-4200-9bff-80e1ceae2086" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572634 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b80e6a4-dd65-4faa-8163-342276cd3481" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572639 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b80e6a4-dd65-4faa-8163-342276cd3481" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572649 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7040185-eeba-423a-b853-8b0845725ca7" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572655 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7040185-eeba-423a-b853-8b0845725ca7" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572767 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747c7dee-388d-4dc0-8a14-12c94c004057" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572773 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="747c7dee-388d-4dc0-8a14-12c94c004057" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572786 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="91c9c400-d63b-4f2a-82fe-e178b9d8041d" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572791 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c9c400-d63b-4f2a-82fe-e178b9d8041d" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: E0404 02:22:16.572801 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a536c680-7c89-488e-befb-087242236628" containerName="oc" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572808 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a536c680-7c89-488e-befb-087242236628" containerName="oc" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.572996 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b80e6a4-dd65-4faa-8163-342276cd3481" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573022 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a536c680-7c89-488e-befb-087242236628" containerName="oc" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573030 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="747c7dee-388d-4dc0-8a14-12c94c004057" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573038 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7040185-eeba-423a-b853-8b0845725ca7" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573053 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="658ab0b4-3080-4229-bda8-98cdaeedd719" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573063 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573073 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="91c9c400-d63b-4f2a-82fe-e178b9d8041d" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573083 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1fdc7f-09be-4dd6-8b31-ff80353025e3" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573096 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d76e7add-8e4a-430f-ac78-55dd1539cb37" containerName="mariadb-account-create-update" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573105 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5f3138-bfae-4200-9bff-80e1ceae2086" containerName="mariadb-database-create" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.573720 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.577693 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.577855 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9bgjc" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.585692 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9vwnm"] Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.662891 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9x787\" (UniqueName: \"kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.663081 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.663136 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.663572 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.772134 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.772203 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.772316 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.772419 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9x787\" (UniqueName: \"kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.777242 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.779895 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.783903 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data\") pod \"glance-db-sync-9vwnm\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.816879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9x787\" (UniqueName: \"kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787\") pod \"glance-db-sync-9vwnm\" 
(UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.849973 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5bc0-account-create-update-hdl7t" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.849970 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5bc0-account-create-update-hdl7t" event={"ID":"bf1fdc7f-09be-4dd6-8b31-ff80353025e3","Type":"ContainerDied","Data":"6d5d63a8fcb19c4c93a92d6f3aa97dabd22623402ee43b3d6ea40d2770a6a065"} Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.850090 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d5d63a8fcb19c4c93a92d6f3aa97dabd22623402ee43b3d6ea40d2770a6a065" Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.852683 4681 generic.go:334] "Generic (PLEG): container finished" podID="78c6ae52-4069-4291-bf5b-2a3567e923d0" containerID="4dcb735305ffd307ec46ac893d47f664e53672c58324610bf11041ba59e76b4e" exitCode=0 Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.852766 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7lzkj" event={"ID":"78c6ae52-4069-4291-bf5b-2a3567e923d0","Type":"ContainerDied","Data":"4dcb735305ffd307ec46ac893d47f664e53672c58324610bf11041ba59e76b4e"} Apr 04 02:22:16 crc kubenswrapper[4681]: I0404 02:22:16.893539 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9vwnm" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.300327 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.301088 4681 scope.go:117] "RemoveContainer" containerID="c457b473bb243d16a05cb24de21f923745e72352c272eee3ba000a0b53241a12" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.386158 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlc8f\" (UniqueName: \"kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f\") pod \"89925da5-3840-4ec1-9bbb-1f518d3381b9\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.386357 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts\") pod \"89925da5-3840-4ec1-9bbb-1f518d3381b9\" (UID: \"89925da5-3840-4ec1-9bbb-1f518d3381b9\") " Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.387555 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "89925da5-3840-4ec1-9bbb-1f518d3381b9" (UID: "89925da5-3840-4ec1-9bbb-1f518d3381b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.395173 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f" (OuterVolumeSpecName: "kube-api-access-hlc8f") pod "89925da5-3840-4ec1-9bbb-1f518d3381b9" (UID: "89925da5-3840-4ec1-9bbb-1f518d3381b9"). InnerVolumeSpecName "kube-api-access-hlc8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.489101 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlc8f\" (UniqueName: \"kubernetes.io/projected/89925da5-3840-4ec1-9bbb-1f518d3381b9-kube-api-access-hlc8f\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.489141 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/89925da5-3840-4ec1-9bbb-1f518d3381b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.549502 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-9vwnm"] Apr 04 02:22:17 crc kubenswrapper[4681]: W0404 02:22:17.554098 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac1bff3d_1bb6_4ab9_9540_46f39fea9a8e.slice/crio-1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052 WatchSource:0}: Error finding container 1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052: Status 404 returned error can't find the container with id 1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052 Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.863617 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9vwnm" event={"ID":"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e","Type":"ContainerStarted","Data":"1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052"} Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.865508 4681 generic.go:334] "Generic (PLEG): container finished" podID="c378514c-b92c-4cd6-83a0-c1ac658b6e9b" containerID="13c21cee4646c0d73633253387281a7f3991e88ca65e2c6fe760a57c88cb50c6" exitCode=0 Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.865560 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-db-create-4dhqt" event={"ID":"c378514c-b92c-4cd6-83a0-c1ac658b6e9b","Type":"ContainerDied","Data":"13c21cee4646c0d73633253387281a7f3991e88ca65e2c6fe760a57c88cb50c6"} Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.867212 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-ce30-account-create-update-6wsbx" event={"ID":"89925da5-3840-4ec1-9bbb-1f518d3381b9","Type":"ContainerDied","Data":"99d5616410e7e4c48b727ad23e71f4a98cbac3926c831aeb25431b7723549896"} Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.867307 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99d5616410e7e4c48b727ad23e71f4a98cbac3926c831aeb25431b7723549896" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.867355 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-ce30-account-create-update-6wsbx" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.937924 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ck5kg"] Apr 04 02:22:17 crc kubenswrapper[4681]: E0404 02:22:17.939062 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89925da5-3840-4ec1-9bbb-1f518d3381b9" containerName="mariadb-account-create-update" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.939085 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="89925da5-3840-4ec1-9bbb-1f518d3381b9" containerName="mariadb-account-create-update" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.939314 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="89925da5-3840-4ec1-9bbb-1f518d3381b9" containerName="mariadb-account-create-update" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.939884 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.942178 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.942321 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.942477 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lv9fh" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.942647 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.970658 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ck5kg"] Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.997985 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.998061 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:17 crc kubenswrapper[4681]: I0404 02:22:17.998194 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2fd8\" (UniqueName: \"kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8\") pod \"keystone-db-sync-ck5kg\" (UID: 
\"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.100123 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.100303 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.100433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2fd8\" (UniqueName: \"kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.107979 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.108576 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: 
I0404 02:22:18.126961 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2fd8\" (UniqueName: \"kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8\") pod \"keystone-db-sync-ck5kg\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.228012 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.264442 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.304095 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnwg\" (UniqueName: \"kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg\") pod \"78c6ae52-4069-4291-bf5b-2a3567e923d0\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.304347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts\") pod \"78c6ae52-4069-4291-bf5b-2a3567e923d0\" (UID: \"78c6ae52-4069-4291-bf5b-2a3567e923d0\") " Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.307070 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "78c6ae52-4069-4291-bf5b-2a3567e923d0" (UID: "78c6ae52-4069-4291-bf5b-2a3567e923d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.318559 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg" (OuterVolumeSpecName: "kube-api-access-kvnwg") pod "78c6ae52-4069-4291-bf5b-2a3567e923d0" (UID: "78c6ae52-4069-4291-bf5b-2a3567e923d0"). InnerVolumeSpecName "kube-api-access-kvnwg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.407335 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/78c6ae52-4069-4291-bf5b-2a3567e923d0-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.407391 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnwg\" (UniqueName: \"kubernetes.io/projected/78c6ae52-4069-4291-bf5b-2a3567e923d0-kube-api-access-kvnwg\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.786663 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ck5kg"] Apr 04 02:22:18 crc kubenswrapper[4681]: W0404 02:22:18.800490 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5ad0f4_4c98_4351_83df_037a25fe6447.slice/crio-6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664 WatchSource:0}: Error finding container 6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664: Status 404 returned error can't find the container with id 6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664 Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.878223 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ck5kg" 
event={"ID":"ab5ad0f4-4c98-4351-83df-037a25fe6447","Type":"ContainerStarted","Data":"6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664"} Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.879983 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7lzkj" Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.879981 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7lzkj" event={"ID":"78c6ae52-4069-4291-bf5b-2a3567e923d0","Type":"ContainerDied","Data":"7bb0d006149d9edb94bfcdeac169a69bc67c0467a1fb0c895279a936df7b7602"} Apr 04 02:22:18 crc kubenswrapper[4681]: I0404 02:22:18.880031 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7bb0d006149d9edb94bfcdeac169a69bc67c0467a1fb0c895279a936df7b7602" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.318431 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.425455 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx5dz\" (UniqueName: \"kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz\") pod \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.425535 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts\") pod \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\" (UID: \"c378514c-b92c-4cd6-83a0-c1ac658b6e9b\") " Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.426382 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts" 
(OuterVolumeSpecName: "operator-scripts") pod "c378514c-b92c-4cd6-83a0-c1ac658b6e9b" (UID: "c378514c-b92c-4cd6-83a0-c1ac658b6e9b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.431516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz" (OuterVolumeSpecName: "kube-api-access-sx5dz") pod "c378514c-b92c-4cd6-83a0-c1ac658b6e9b" (UID: "c378514c-b92c-4cd6-83a0-c1ac658b6e9b"). InnerVolumeSpecName "kube-api-access-sx5dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.527800 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sx5dz\" (UniqueName: \"kubernetes.io/projected/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-kube-api-access-sx5dz\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.527839 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c378514c-b92c-4cd6-83a0-c1ac658b6e9b-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.892507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4dhqt" event={"ID":"c378514c-b92c-4cd6-83a0-c1ac658b6e9b","Type":"ContainerDied","Data":"5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600"} Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.892934 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ef546581228edb0a9387d56e1bb5bd000d55c5d0e83963b80f85d1e27cd9600" Apr 04 02:22:19 crc kubenswrapper[4681]: I0404 02:22:19.892775 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4dhqt" Apr 04 02:22:20 crc kubenswrapper[4681]: I0404 02:22:20.706187 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Apr 04 02:22:20 crc kubenswrapper[4681]: I0404 02:22:20.706466 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Apr 04 02:22:20 crc kubenswrapper[4681]: I0404 02:22:20.905734 4681 generic.go:334] "Generic (PLEG): container finished" podID="58a8ceed-9c8d-4f64-a21a-8868d39acb26" containerID="3f7e807ead5438442082ebc11cfbfe063147e5777f588d021a586b1b4cdae775" exitCode=0 Apr 04 02:22:20 crc kubenswrapper[4681]: I0404 02:22:20.905971 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrl6" event={"ID":"58a8ceed-9c8d-4f64-a21a-8868d39acb26","Type":"ContainerDied","Data":"3f7e807ead5438442082ebc11cfbfe063147e5777f588d021a586b1b4cdae775"} Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.142527 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-hrgm8"] Apr 04 02:22:21 crc kubenswrapper[4681]: E0404 02:22:21.142897 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78c6ae52-4069-4291-bf5b-2a3567e923d0" containerName="mariadb-database-create" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.142915 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="78c6ae52-4069-4291-bf5b-2a3567e923d0" containerName="mariadb-database-create" Apr 04 02:22:21 crc kubenswrapper[4681]: E0404 02:22:21.142925 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c378514c-b92c-4cd6-83a0-c1ac658b6e9b" containerName="mariadb-database-create" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.142932 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c378514c-b92c-4cd6-83a0-c1ac658b6e9b" containerName="mariadb-database-create" Apr 04 02:22:21 crc 
kubenswrapper[4681]: I0404 02:22:21.143108 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c378514c-b92c-4cd6-83a0-c1ac658b6e9b" containerName="mariadb-database-create" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.143133 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="78c6ae52-4069-4291-bf5b-2a3567e923d0" containerName="mariadb-database-create" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.143666 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.147964 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-txcwh" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.148141 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.159243 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-hrgm8"] Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.262639 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.263182 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.263228 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.263332 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5swv\" (UniqueName: \"kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.364798 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.364857 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.364899 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5swv\" (UniqueName: \"kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.364943 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.370814 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.375651 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.376100 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.381846 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5swv\" (UniqueName: \"kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv\") pod \"watcher-db-sync-hrgm8\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.476559 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.922066 4681 generic.go:334] "Generic (PLEG): container finished" podID="88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" containerID="15b539cfe9d4581737836dbe14ea6deedb8a8fad8ff36f006ff507ffa4ac7136" exitCode=0 Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.922136 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3458-account-create-update-ldpbt" event={"ID":"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c","Type":"ContainerDied","Data":"15b539cfe9d4581737836dbe14ea6deedb8a8fad8ff36f006ff507ffa4ac7136"} Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.923868 4681 generic.go:334] "Generic (PLEG): container finished" podID="d042e61e-59c3-408a-a5c1-95f6f8f52c21" containerID="72bedd2aeed60c819f3ec3bb587a679907467c7231b1ce44fab871ce6c27f73c" exitCode=0 Apr 04 02:22:21 crc kubenswrapper[4681]: I0404 02:22:21.923987 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81c4-account-create-update-xh2xd" event={"ID":"d042e61e-59c3-408a-a5c1-95f6f8f52c21","Type":"ContainerDied","Data":"72bedd2aeed60c819f3ec3bb587a679907467c7231b1ce44fab871ce6c27f73c"} Apr 04 02:22:24 crc kubenswrapper[4681]: I0404 02:22:24.831762 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Apr 04 02:22:24 crc kubenswrapper[4681]: I0404 02:22:24.837329 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Apr 04 02:22:24 crc kubenswrapper[4681]: I0404 02:22:24.971335 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.719664 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.729427 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.742517 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.805949 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsvpl\" (UniqueName: \"kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl\") pod \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.806140 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts\") pod \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.806178 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts\") pod \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.806347 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbwzd\" (UniqueName: \"kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd\") pod \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\" (UID: \"d042e61e-59c3-408a-a5c1-95f6f8f52c21\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.806456 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snc8f\" (UniqueName: \"kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f\") pod \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\" (UID: \"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.806560 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts\") pod \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\" (UID: \"58a8ceed-9c8d-4f64-a21a-8868d39acb26\") " Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.807584 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" (UID: "88cffa15-91ef-48fe-bd03-46cf3e2b4b9c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.808411 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d042e61e-59c3-408a-a5c1-95f6f8f52c21" (UID: "d042e61e-59c3-408a-a5c1-95f6f8f52c21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.808961 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58a8ceed-9c8d-4f64-a21a-8868d39acb26" (UID: "58a8ceed-9c8d-4f64-a21a-8868d39acb26"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.815458 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd" (OuterVolumeSpecName: "kube-api-access-kbwzd") pod "d042e61e-59c3-408a-a5c1-95f6f8f52c21" (UID: "d042e61e-59c3-408a-a5c1-95f6f8f52c21"). InnerVolumeSpecName "kube-api-access-kbwzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.820254 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl" (OuterVolumeSpecName: "kube-api-access-dsvpl") pod "58a8ceed-9c8d-4f64-a21a-8868d39acb26" (UID: "58a8ceed-9c8d-4f64-a21a-8868d39acb26"). InnerVolumeSpecName "kube-api-access-dsvpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.820350 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f" (OuterVolumeSpecName: "kube-api-access-snc8f") pod "88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" (UID: "88cffa15-91ef-48fe-bd03-46cf3e2b4b9c"). InnerVolumeSpecName "kube-api-access-snc8f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.908933 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snc8f\" (UniqueName: \"kubernetes.io/projected/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-kube-api-access-snc8f\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.909400 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58a8ceed-9c8d-4f64-a21a-8868d39acb26-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.909420 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsvpl\" (UniqueName: \"kubernetes.io/projected/58a8ceed-9c8d-4f64-a21a-8868d39acb26-kube-api-access-dsvpl\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.909438 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d042e61e-59c3-408a-a5c1-95f6f8f52c21-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.909458 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.909475 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbwzd\" (UniqueName: \"kubernetes.io/projected/d042e61e-59c3-408a-a5c1-95f6f8f52c21-kube-api-access-kbwzd\") on node \"crc\" DevicePath \"\"" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.994771 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3458-account-create-update-ldpbt" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.994772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3458-account-create-update-ldpbt" event={"ID":"88cffa15-91ef-48fe-bd03-46cf3e2b4b9c","Type":"ContainerDied","Data":"86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea"} Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.994895 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d0fe94c78e0a3ddcf371f2e5f107fc17f85b2eae8419914b1b2911853460ea" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.997436 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-81c4-account-create-update-xh2xd" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.997467 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-81c4-account-create-update-xh2xd" event={"ID":"d042e61e-59c3-408a-a5c1-95f6f8f52c21","Type":"ContainerDied","Data":"fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3"} Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.997527 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce55a1d6cac36806c40c92e7c77ccaaced18f5e584967a7c6e0f5f06d0841c3" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.999236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-zwrl6" event={"ID":"58a8ceed-9c8d-4f64-a21a-8868d39acb26","Type":"ContainerDied","Data":"ce1272eed317518e298d1f40b397c2303c494d5c27259cffa9b216f792332cd3"} Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.999287 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1272eed317518e298d1f40b397c2303c494d5c27259cffa9b216f792332cd3" Apr 04 02:22:27 crc kubenswrapper[4681]: I0404 02:22:27.999366 4681 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-zwrl6" Apr 04 02:22:42 crc kubenswrapper[4681]: I0404 02:22:42.294619 4681 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.191842301s: [/var/lib/containers/storage/overlay/1de14f4871788b8bfaff091827e40213a94940ab3426d61812a073368bcda202/diff /var/log/pods/openstack_ovn-northd-0_4c79dddc-8bad-4bfb-920f-434aea2c400c/ovn-northd/0.log]; will not log again for this container unless duration exceeds 2s Apr 04 02:22:58 crc kubenswrapper[4681]: E0404 02:22:58.044335 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[etc-swift], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openstack/swift-storage-0" podUID="cdc00a76-b945-4eca-98d7-1f126a78785f" Apr 04 02:22:58 crc kubenswrapper[4681]: I0404 02:22:58.300198 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Apr 04 02:23:03 crc kubenswrapper[4681]: I0404 02:23:03.101718 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:23:03 crc kubenswrapper[4681]: I0404 02:23:03.122982 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cdc00a76-b945-4eca-98d7-1f126a78785f-etc-swift\") pod \"swift-storage-0\" (UID: \"cdc00a76-b945-4eca-98d7-1f126a78785f\") " pod="openstack/swift-storage-0" Apr 04 02:23:03 crc kubenswrapper[4681]: I0404 02:23:03.402193 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Apr 04 02:23:05 crc kubenswrapper[4681]: E0404 02:23:05.942869 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Apr 04 02:23:05 crc kubenswrapper[4681]: E0404 02:23:05.945330 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Apr 04 02:23:05 crc kubenswrapper[4681]: E0404 02:23:05.945626 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9x787,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPa
th:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-9vwnm_openstack(ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:23:05 crc kubenswrapper[4681]: E0404 02:23:05.947701 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-9vwnm" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" Apr 04 02:23:15 crc kubenswrapper[4681]: E0404 02:23:15.068633 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-9vwnm" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" Apr 04 02:23:15 crc kubenswrapper[4681]: I0404 02:23:15.103141 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Apr 04 02:23:15 crc kubenswrapper[4681]: I0404 02:23:15.262419 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/openstack-cell1-galera-0" Apr 04 02:23:15 crc kubenswrapper[4681]: I0404 02:23:15.461767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ck5kg" event={"ID":"ab5ad0f4-4c98-4351-83df-037a25fe6447","Type":"ContainerStarted","Data":"fe8eb6697cf3f151e7f12279d37ffd126e139882026bf4e41885afea76100b12"} Apr 04 02:23:15 crc kubenswrapper[4681]: I0404 02:23:15.539135 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Apr 04 02:23:15 crc kubenswrapper[4681]: W0404 02:23:15.648351 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0011158a_2855_4b60_9798_77badda0f40c.slice/crio-d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7 WatchSource:0}: Error finding container d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7: Status 404 returned error can't find the container with id d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7 Apr 04 02:23:15 crc kubenswrapper[4681]: I0404 02:23:15.648596 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-hrgm8"] Apr 04 02:23:16 crc kubenswrapper[4681]: I0404 02:23:16.470968 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-hrgm8" event={"ID":"0011158a-2855-4b60-9798-77badda0f40c","Type":"ContainerStarted","Data":"d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7"} Apr 04 02:23:16 crc kubenswrapper[4681]: I0404 02:23:16.473166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"d15b881eda7e39cd3114159e4a34fd8a02be74be429e27ab2655fb5096583acf"} Apr 04 02:23:16 crc kubenswrapper[4681]: I0404 02:23:16.495784 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ck5kg" 
podStartSLOduration=3.1921542609999998 podStartE2EDuration="59.495767377s" podCreationTimestamp="2026-04-04 02:22:17 +0000 UTC" firstStartedPulling="2026-04-04 02:22:18.803178645 +0000 UTC m=+1618.468953765" lastFinishedPulling="2026-04-04 02:23:15.106791761 +0000 UTC m=+1674.772566881" observedRunningTime="2026-04-04 02:23:16.484447907 +0000 UTC m=+1676.150223027" watchObservedRunningTime="2026-04-04 02:23:16.495767377 +0000 UTC m=+1676.161542497" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.010521 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:23:19 crc kubenswrapper[4681]: E0404 02:23:19.012246 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58a8ceed-9c8d-4f64-a21a-8868d39acb26" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012283 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="58a8ceed-9c8d-4f64-a21a-8868d39acb26" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: E0404 02:23:19.012313 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d042e61e-59c3-408a-a5c1-95f6f8f52c21" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012321 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d042e61e-59c3-408a-a5c1-95f6f8f52c21" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: E0404 02:23:19.012344 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012353 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012583 4681 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012602 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d042e61e-59c3-408a-a5c1-95f6f8f52c21" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.012628 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="58a8ceed-9c8d-4f64-a21a-8868d39acb26" containerName="mariadb-account-create-update" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.014123 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.025648 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.107556 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.107625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lphdq\" (UniqueName: \"kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.107817 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.209130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.209235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.209287 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lphdq\" (UniqueName: \"kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.209721 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.209800 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.229711 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lphdq\" (UniqueName: \"kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq\") pod \"redhat-operators-h5zv5\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.351372 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.392176 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-zwrl6"] Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.399798 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-zwrl6"] Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.493859 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-4q2fr"] Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.495087 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.497707 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.523911 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4q2fr"] Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.617587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts\") pod \"root-account-create-update-4q2fr\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.617766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksdp9\" (UniqueName: \"kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9\") pod \"root-account-create-update-4q2fr\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.719122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksdp9\" (UniqueName: \"kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9\") pod \"root-account-create-update-4q2fr\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.719290 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts\") pod \"root-account-create-update-4q2fr\" (UID: 
\"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.720028 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts\") pod \"root-account-create-update-4q2fr\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.745193 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksdp9\" (UniqueName: \"kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9\") pod \"root-account-create-update-4q2fr\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.829150 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:19 crc kubenswrapper[4681]: I0404 02:23:19.873368 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:23:19 crc kubenswrapper[4681]: W0404 02:23:19.882625 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56f8d1b1_7cb9_400a_a94b_71bf1fcf0902.slice/crio-5749ac5909873847d332dc6e5debb4de1613a169e07ef94519d894b0acec6d44 WatchSource:0}: Error finding container 5749ac5909873847d332dc6e5debb4de1613a169e07ef94519d894b0acec6d44: Status 404 returned error can't find the container with id 5749ac5909873847d332dc6e5debb4de1613a169e07ef94519d894b0acec6d44 Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.336337 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-4q2fr"] Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.545770 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"92064bb50efc3621ff4f02dfdc330cdb0bde126ef057f4eb212ef3b0eafe89ee"} Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.546190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"6bc826c82523a3afbee1ada927334b70866c879540b16813a47a043461ae9d42"} Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.547991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q2fr" event={"ID":"18b756c8-3f48-42ed-a4e4-895e2335fdb3","Type":"ContainerStarted","Data":"84247cc30b80d5fb325fae9ba3549c18041288c0407f7abc918422885f5eb153"} Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.553552 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerID="e5b50429033c9d35bc05c8e935f01f9273eba046a67ebbe1892e67cdcff6a59e" exitCode=0 Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.553592 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerDied","Data":"e5b50429033c9d35bc05c8e935f01f9273eba046a67ebbe1892e67cdcff6a59e"} Apr 04 02:23:20 crc kubenswrapper[4681]: I0404 02:23:20.553615 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerStarted","Data":"5749ac5909873847d332dc6e5debb4de1613a169e07ef94519d894b0acec6d44"} Apr 04 02:23:21 crc kubenswrapper[4681]: I0404 02:23:21.256466 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58a8ceed-9c8d-4f64-a21a-8868d39acb26" path="/var/lib/kubelet/pods/58a8ceed-9c8d-4f64-a21a-8868d39acb26/volumes" Apr 04 02:23:21 crc kubenswrapper[4681]: I0404 02:23:21.599631 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q2fr" event={"ID":"18b756c8-3f48-42ed-a4e4-895e2335fdb3","Type":"ContainerStarted","Data":"f3646211de8e3cb02146e2c4f79e70656d3d863b138564f95885191fb284eab9"} Apr 04 02:23:21 crc kubenswrapper[4681]: I0404 02:23:21.604113 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"8990c061bd4fdaf1af309edf9f594b67726c594f15755a0b9196bdfc88debf69"} Apr 04 02:23:21 crc kubenswrapper[4681]: I0404 02:23:21.622811 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-4q2fr" podStartSLOduration=2.622792894 podStartE2EDuration="2.622792894s" podCreationTimestamp="2026-04-04 02:23:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:23:21.614822715 +0000 UTC m=+1681.280597835" watchObservedRunningTime="2026-04-04 02:23:21.622792894 +0000 UTC m=+1681.288568034" Apr 04 02:23:22 crc kubenswrapper[4681]: I0404 02:23:22.615829 4681 generic.go:334] "Generic (PLEG): container finished" podID="18b756c8-3f48-42ed-a4e4-895e2335fdb3" containerID="f3646211de8e3cb02146e2c4f79e70656d3d863b138564f95885191fb284eab9" exitCode=0 Apr 04 02:23:22 crc kubenswrapper[4681]: I0404 02:23:22.615905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q2fr" event={"ID":"18b756c8-3f48-42ed-a4e4-895e2335fdb3","Type":"ContainerDied","Data":"f3646211de8e3cb02146e2c4f79e70656d3d863b138564f95885191fb284eab9"} Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.750304 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.863468 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksdp9\" (UniqueName: \"kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9\") pod \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.863647 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts\") pod \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\" (UID: \"18b756c8-3f48-42ed-a4e4-895e2335fdb3\") " Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.864492 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"18b756c8-3f48-42ed-a4e4-895e2335fdb3" (UID: "18b756c8-3f48-42ed-a4e4-895e2335fdb3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.872194 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9" (OuterVolumeSpecName: "kube-api-access-ksdp9") pod "18b756c8-3f48-42ed-a4e4-895e2335fdb3" (UID: "18b756c8-3f48-42ed-a4e4-895e2335fdb3"). InnerVolumeSpecName "kube-api-access-ksdp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.965120 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksdp9\" (UniqueName: \"kubernetes.io/projected/18b756c8-3f48-42ed-a4e4-895e2335fdb3-kube-api-access-ksdp9\") on node \"crc\" DevicePath \"\"" Apr 04 02:23:25 crc kubenswrapper[4681]: I0404 02:23:25.965148 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b756c8-3f48-42ed-a4e4-895e2335fdb3-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:23:26 crc kubenswrapper[4681]: I0404 02:23:26.652123 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-4q2fr" event={"ID":"18b756c8-3f48-42ed-a4e4-895e2335fdb3","Type":"ContainerDied","Data":"84247cc30b80d5fb325fae9ba3549c18041288c0407f7abc918422885f5eb153"} Apr 04 02:23:26 crc kubenswrapper[4681]: I0404 02:23:26.652169 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84247cc30b80d5fb325fae9ba3549c18041288c0407f7abc918422885f5eb153" Apr 04 02:23:26 crc kubenswrapper[4681]: I0404 02:23:26.652195 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-4q2fr" Apr 04 02:23:31 crc kubenswrapper[4681]: I0404 02:23:31.722441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"0bb1da0f8fc5b840b8a389c0c0229e972912e5bf82e55610dc18afae1bfe8d12"} Apr 04 02:23:42 crc kubenswrapper[4681]: E0404 02:23:42.743965 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Apr 04 02:23:42 crc kubenswrapper[4681]: E0404 02:23:42.744601 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest" Apr 04 02:23:42 crc kubenswrapper[4681]: E0404 02:23:42.745235 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/watcher/watcher.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5swv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
watcher-db-sync-hrgm8_openstack(0011158a-2855-4b60-9798-77badda0f40c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:23:42 crc kubenswrapper[4681]: E0404 02:23:42.748076 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-db-sync-hrgm8" podUID="0011158a-2855-4b60-9798-77badda0f40c" Apr 04 02:23:43 crc kubenswrapper[4681]: E0404 02:23:43.048821 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-watcher-api:watcher_latest\\\"\"" pod="openstack/watcher-db-sync-hrgm8" podUID="0011158a-2855-4b60-9798-77badda0f40c" Apr 04 02:23:44 crc kubenswrapper[4681]: I0404 02:23:44.058340 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"60c8e97e9ed2cbbf59e2f302ff54376450e11bd043a4792e90e527aa6766cff7"} Apr 04 02:23:44 crc kubenswrapper[4681]: I0404 02:23:44.062222 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerStarted","Data":"fd3da4d24c0d780332428ccba21dc5cb2275193f646e44626f92a74d85cdc365"} Apr 04 02:23:45 crc kubenswrapper[4681]: I0404 02:23:45.073668 4681 generic.go:334] "Generic (PLEG): container finished" podID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerID="fd3da4d24c0d780332428ccba21dc5cb2275193f646e44626f92a74d85cdc365" exitCode=0 Apr 04 02:23:45 crc kubenswrapper[4681]: I0404 02:23:45.073774 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" 
event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerDied","Data":"fd3da4d24c0d780332428ccba21dc5cb2275193f646e44626f92a74d85cdc365"} Apr 04 02:23:45 crc kubenswrapper[4681]: I0404 02:23:45.079319 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"18a0d53d134be03a7b9298fc6b49fc8a502b186a97ad9df01a9740291365efd0"} Apr 04 02:23:45 crc kubenswrapper[4681]: I0404 02:23:45.082606 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9vwnm" event={"ID":"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e","Type":"ContainerStarted","Data":"a915244e7d7e523425e20fe41c6eae5b7c6e80833a51146876261da7673a8ada"} Apr 04 02:23:45 crc kubenswrapper[4681]: I0404 02:23:45.132618 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-9vwnm" podStartSLOduration=3.358189906 podStartE2EDuration="1m29.13260296s" podCreationTimestamp="2026-04-04 02:22:16 +0000 UTC" firstStartedPulling="2026-04-04 02:22:17.560392626 +0000 UTC m=+1617.226167756" lastFinishedPulling="2026-04-04 02:23:43.33480567 +0000 UTC m=+1703.000580810" observedRunningTime="2026-04-04 02:23:45.114654858 +0000 UTC m=+1704.780430008" watchObservedRunningTime="2026-04-04 02:23:45.13260296 +0000 UTC m=+1704.798378080" Apr 04 02:23:48 crc kubenswrapper[4681]: I0404 02:23:48.126186 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"0647f9161dcf9ef1d59e14802307c260062022146536ff74d30af9890119b4bd"} Apr 04 02:23:50 crc kubenswrapper[4681]: I0404 02:23:50.174609 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerStarted","Data":"947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46"} 
Apr 04 02:23:50 crc kubenswrapper[4681]: I0404 02:23:50.181064 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"c5e9b8e0dee089ae3ca620d3e0953826d5de5dce892a4ffde49b2c0160fa6b16"} Apr 04 02:23:50 crc kubenswrapper[4681]: I0404 02:23:50.197937 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h5zv5" podStartSLOduration=3.120620249 podStartE2EDuration="32.197914516s" podCreationTimestamp="2026-04-04 02:23:18 +0000 UTC" firstStartedPulling="2026-04-04 02:23:20.555825155 +0000 UTC m=+1680.221600275" lastFinishedPulling="2026-04-04 02:23:49.633119422 +0000 UTC m=+1709.298894542" observedRunningTime="2026-04-04 02:23:50.191795707 +0000 UTC m=+1709.857570837" watchObservedRunningTime="2026-04-04 02:23:50.197914516 +0000 UTC m=+1709.863689636" Apr 04 02:23:51 crc kubenswrapper[4681]: I0404 02:23:51.212881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"c2277009982798cb08e0f0c330068cc40cb8ca565a0a59203c397e8527081b32"} Apr 04 02:23:51 crc kubenswrapper[4681]: I0404 02:23:51.213159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"1b128d0d95cf79f775ddc1abc615baa90bac7d0f6c47d5fa7ba4d1c8ed947ebb"} Apr 04 02:23:52 crc kubenswrapper[4681]: I0404 02:23:52.218550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"e20f2161338ab94f17ac7a74c9bca0885ea7ef4a8ff5486b52f3a17e86b90689"} Apr 04 02:23:54 crc kubenswrapper[4681]: I0404 02:23:54.238611 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"3ee84426303b46b70c194566b705810ced8fd9a01e180d7d4b6c09be6d876d9a"} Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.251166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"b5fe9619b9992c11e5ffd45aa88c6a73d2dc35157496ded32c4e4a69e6c835b3"} Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.251507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"ada96982d5cb729dcafaca3aa70bf28deec1e67aed65e5fc24367394544adbb7"} Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.251527 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"cdc00a76-b945-4eca-98d7-1f126a78785f","Type":"ContainerStarted","Data":"e8d7349694f5ec0d95d84603a7b665dd9b1fbc4feaa963b13e3efe942e050018"} Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.573958 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=147.732061781 podStartE2EDuration="3m2.573940591s" podCreationTimestamp="2026-04-04 02:20:53 +0000 UTC" firstStartedPulling="2026-04-04 02:23:15.569527056 +0000 UTC m=+1675.235302166" lastFinishedPulling="2026-04-04 02:23:50.411405856 +0000 UTC m=+1710.077180976" observedRunningTime="2026-04-04 02:23:55.309166742 +0000 UTC m=+1714.974941872" watchObservedRunningTime="2026-04-04 02:23:55.573940591 +0000 UTC m=+1715.239715711" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.581012 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:23:55 crc kubenswrapper[4681]: E0404 02:23:55.581451 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="18b756c8-3f48-42ed-a4e4-895e2335fdb3" containerName="mariadb-account-create-update" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.581481 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b756c8-3f48-42ed-a4e4-895e2335fdb3" containerName="mariadb-account-create-update" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.581714 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b756c8-3f48-42ed-a4e4-895e2335fdb3" containerName="mariadb-account-create-update" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.582786 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.590402 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.605350 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640064 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640136 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640162 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640280 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9ql2\" (UniqueName: \"kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640299 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.640418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741494 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741604 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-l9ql2\" (UniqueName: \"kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741633 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741668 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741717 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.741765 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.742616 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.743190 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.744020 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.744657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.745312 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb\") pod \"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.763983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9ql2\" (UniqueName: \"kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2\") pod 
\"dnsmasq-dns-67cf7bcc65-9tfw2\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:55 crc kubenswrapper[4681]: I0404 02:23:55.902995 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:56 crc kubenswrapper[4681]: I0404 02:23:56.350473 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:23:56 crc kubenswrapper[4681]: W0404 02:23:56.359399 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod957e4b64_c18f_4cee_87dc_848a0d936626.slice/crio-0388460ee3a4e4df638804e819dbae88021aec4765638fc313e36e56d05a6302 WatchSource:0}: Error finding container 0388460ee3a4e4df638804e819dbae88021aec4765638fc313e36e56d05a6302: Status 404 returned error can't find the container with id 0388460ee3a4e4df638804e819dbae88021aec4765638fc313e36e56d05a6302 Apr 04 02:23:56 crc kubenswrapper[4681]: I0404 02:23:56.523857 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:23:56 crc kubenswrapper[4681]: I0404 02:23:56.523909 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:23:57 crc kubenswrapper[4681]: I0404 02:23:57.299335 4681 generic.go:334] "Generic (PLEG): container finished" podID="957e4b64-c18f-4cee-87dc-848a0d936626" 
containerID="0667ee411a264f5e247fddd5b11ff7055e119980761e03859eb68a7d5f2260cb" exitCode=0 Apr 04 02:23:57 crc kubenswrapper[4681]: I0404 02:23:57.299523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" event={"ID":"957e4b64-c18f-4cee-87dc-848a0d936626","Type":"ContainerDied","Data":"0667ee411a264f5e247fddd5b11ff7055e119980761e03859eb68a7d5f2260cb"} Apr 04 02:23:57 crc kubenswrapper[4681]: I0404 02:23:57.299652 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" event={"ID":"957e4b64-c18f-4cee-87dc-848a0d936626","Type":"ContainerStarted","Data":"0388460ee3a4e4df638804e819dbae88021aec4765638fc313e36e56d05a6302"} Apr 04 02:23:58 crc kubenswrapper[4681]: I0404 02:23:58.311173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" event={"ID":"957e4b64-c18f-4cee-87dc-848a0d936626","Type":"ContainerStarted","Data":"a5dd28324d39eb03288061e75df5a1eedf7c8ef10f772e356f415fb16cc5d60b"} Apr 04 02:23:58 crc kubenswrapper[4681]: I0404 02:23:58.311436 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:23:58 crc kubenswrapper[4681]: I0404 02:23:58.312959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-hrgm8" event={"ID":"0011158a-2855-4b60-9798-77badda0f40c","Type":"ContainerStarted","Data":"38cc4e3a4a4b3258af6cd176ca88aa5d76a1f8c1cb46392b6b2217526bbf2c23"} Apr 04 02:23:58 crc kubenswrapper[4681]: I0404 02:23:58.332414 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podStartSLOduration=3.332393931 podStartE2EDuration="3.332393931s" podCreationTimestamp="2026-04-04 02:23:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:23:58.328595387 +0000 UTC 
m=+1717.994370527" watchObservedRunningTime="2026-04-04 02:23:58.332393931 +0000 UTC m=+1717.998169051" Apr 04 02:23:58 crc kubenswrapper[4681]: I0404 02:23:58.349434 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-hrgm8" podStartSLOduration=54.957548963 podStartE2EDuration="1m37.349415718s" podCreationTimestamp="2026-04-04 02:22:21 +0000 UTC" firstStartedPulling="2026-04-04 02:23:15.650644039 +0000 UTC m=+1675.316419179" lastFinishedPulling="2026-04-04 02:23:58.042510814 +0000 UTC m=+1717.708285934" observedRunningTime="2026-04-04 02:23:58.344471683 +0000 UTC m=+1718.010246803" watchObservedRunningTime="2026-04-04 02:23:58.349415718 +0000 UTC m=+1718.015190838" Apr 04 02:23:59 crc kubenswrapper[4681]: I0404 02:23:59.354621 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:23:59 crc kubenswrapper[4681]: I0404 02:23:59.354968 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.128271 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587824-jmsr7"] Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.129930 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.132339 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.132419 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.132597 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.138638 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587824-jmsr7"] Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.240212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncs6c\" (UniqueName: \"kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c\") pod \"auto-csr-approver-29587824-jmsr7\" (UID: \"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535\") " pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.341670 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncs6c\" (UniqueName: \"kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c\") pod \"auto-csr-approver-29587824-jmsr7\" (UID: \"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535\") " pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.375061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncs6c\" (UniqueName: \"kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c\") pod \"auto-csr-approver-29587824-jmsr7\" (UID: \"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535\") " 
pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.413043 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" probeResult="failure" output=< Apr 04 02:24:00 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:24:00 crc kubenswrapper[4681]: > Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.448493 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:00 crc kubenswrapper[4681]: I0404 02:24:00.916405 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587824-jmsr7"] Apr 04 02:24:00 crc kubenswrapper[4681]: W0404 02:24:00.922327 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c86d6aa_ddc5_4b18_8a91_2d4bc6f62535.slice/crio-6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483 WatchSource:0}: Error finding container 6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483: Status 404 returned error can't find the container with id 6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483 Apr 04 02:24:01 crc kubenswrapper[4681]: I0404 02:24:01.341864 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" event={"ID":"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535","Type":"ContainerStarted","Data":"6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483"} Apr 04 02:24:04 crc kubenswrapper[4681]: I0404 02:24:04.375153 4681 generic.go:334] "Generic (PLEG): container finished" podID="3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" containerID="a41723557255d81eea98849be1108edf01eebcf5bc883e2de0770a3843dfd1ba" exitCode=0 Apr 04 02:24:04 crc 
kubenswrapper[4681]: I0404 02:24:04.375256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" event={"ID":"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535","Type":"ContainerDied","Data":"a41723557255d81eea98849be1108edf01eebcf5bc883e2de0770a3843dfd1ba"} Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.760541 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.905413 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.937449 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncs6c\" (UniqueName: \"kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c\") pod \"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535\" (UID: \"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535\") " Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.971523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c" (OuterVolumeSpecName: "kube-api-access-ncs6c") pod "3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" (UID: "3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535"). InnerVolumeSpecName "kube-api-access-ncs6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.996284 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:24:05 crc kubenswrapper[4681]: I0404 02:24:05.996932 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="dnsmasq-dns" containerID="cri-o://c61bd32edc62af352fd19573a66b63fcf8618beede8596248d09ff9bf3664577" gracePeriod=10 Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.039435 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncs6c\" (UniqueName: \"kubernetes.io/projected/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535-kube-api-access-ncs6c\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.402814 4681 generic.go:334] "Generic (PLEG): container finished" podID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerID="c61bd32edc62af352fd19573a66b63fcf8618beede8596248d09ff9bf3664577" exitCode=0 Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.402897 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerDied","Data":"c61bd32edc62af352fd19573a66b63fcf8618beede8596248d09ff9bf3664577"} Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.404517 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" event={"ID":"3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535","Type":"ContainerDied","Data":"6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483"} Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.404556 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587824-jmsr7" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.404554 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ee5d16360e667b7cc76d6e9cd326ba5f6cbd3ddded673b6b646d69c81786483" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.457470 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.548202 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc\") pod \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.548428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb\") pod \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.548465 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config\") pod \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.548513 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb\") pod \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.548544 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ql5dj\" (UniqueName: \"kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj\") pod \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\" (UID: \"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19\") " Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.553023 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj" (OuterVolumeSpecName: "kube-api-access-ql5dj") pod "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" (UID: "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19"). InnerVolumeSpecName "kube-api-access-ql5dj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.594994 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" (UID: "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.595038 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" (UID: "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.598593 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" (UID: "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.600041 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config" (OuterVolumeSpecName: "config") pod "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" (UID: "bfe3f5f0-7263-45d4-8393-eaa1ccebfe19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.650641 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.650674 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.650684 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.650692 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ql5dj\" (UniqueName: \"kubernetes.io/projected/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-kube-api-access-ql5dj\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.650703 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:06 crc kubenswrapper[4681]: I0404 02:24:06.827072 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587818-zwhsm"] Apr 04 02:24:06 crc 
kubenswrapper[4681]: I0404 02:24:06.835920 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587818-zwhsm"] Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.210071 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e6e9171-5cc4-45fb-9668-dee0d4a7df22" path="/var/lib/kubelet/pods/8e6e9171-5cc4-45fb-9668-dee0d4a7df22/volumes" Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.414413 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" event={"ID":"bfe3f5f0-7263-45d4-8393-eaa1ccebfe19","Type":"ContainerDied","Data":"6ff70907f296bb3774472a4c99aea6a6153400b98f82e38239889e8526a3702a"} Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.414462 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-79d5b69897-l2rjd" Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.414474 4681 scope.go:117] "RemoveContainer" containerID="c61bd32edc62af352fd19573a66b63fcf8618beede8596248d09ff9bf3664577" Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.440473 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.441215 4681 scope.go:117] "RemoveContainer" containerID="5eececdba393d6de2eec67ccc39436bc3121546f00c5a0880b34465a1ecec016" Apr 04 02:24:07 crc kubenswrapper[4681]: I0404 02:24:07.449383 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79d5b69897-l2rjd"] Apr 04 02:24:09 crc kubenswrapper[4681]: I0404 02:24:09.227572 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" path="/var/lib/kubelet/pods/bfe3f5f0-7263-45d4-8393-eaa1ccebfe19/volumes" Apr 04 02:24:10 crc kubenswrapper[4681]: I0404 02:24:10.407164 4681 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" probeResult="failure" output=< Apr 04 02:24:10 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:24:10 crc kubenswrapper[4681]: > Apr 04 02:24:17 crc kubenswrapper[4681]: I0404 02:24:17.484095 4681 scope.go:117] "RemoveContainer" containerID="c9f853411308d27149baba2023e2f0497a3e09a26ea37fb624697009daeb04eb" Apr 04 02:24:17 crc kubenswrapper[4681]: I0404 02:24:17.528127 4681 scope.go:117] "RemoveContainer" containerID="8ec8b822824c864bd51fe804279935945c588ae5b65baf236e99310da86b67e4" Apr 04 02:24:20 crc kubenswrapper[4681]: I0404 02:24:20.409669 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" probeResult="failure" output=< Apr 04 02:24:20 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:24:20 crc kubenswrapper[4681]: > Apr 04 02:24:23 crc kubenswrapper[4681]: I0404 02:24:23.622525 4681 generic.go:334] "Generic (PLEG): container finished" podID="ab5ad0f4-4c98-4351-83df-037a25fe6447" containerID="fe8eb6697cf3f151e7f12279d37ffd126e139882026bf4e41885afea76100b12" exitCode=0 Apr 04 02:24:23 crc kubenswrapper[4681]: I0404 02:24:23.622623 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ck5kg" event={"ID":"ab5ad0f4-4c98-4351-83df-037a25fe6447","Type":"ContainerDied","Data":"fe8eb6697cf3f151e7f12279d37ffd126e139882026bf4e41885afea76100b12"} Apr 04 02:24:24 crc kubenswrapper[4681]: I0404 02:24:24.966289 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.120603 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data\") pod \"ab5ad0f4-4c98-4351-83df-037a25fe6447\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.120681 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle\") pod \"ab5ad0f4-4c98-4351-83df-037a25fe6447\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.120725 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2fd8\" (UniqueName: \"kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8\") pod \"ab5ad0f4-4c98-4351-83df-037a25fe6447\" (UID: \"ab5ad0f4-4c98-4351-83df-037a25fe6447\") " Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.126688 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8" (OuterVolumeSpecName: "kube-api-access-b2fd8") pod "ab5ad0f4-4c98-4351-83df-037a25fe6447" (UID: "ab5ad0f4-4c98-4351-83df-037a25fe6447"). InnerVolumeSpecName "kube-api-access-b2fd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.149704 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5ad0f4-4c98-4351-83df-037a25fe6447" (UID: "ab5ad0f4-4c98-4351-83df-037a25fe6447"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.188782 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data" (OuterVolumeSpecName: "config-data") pod "ab5ad0f4-4c98-4351-83df-037a25fe6447" (UID: "ab5ad0f4-4c98-4351-83df-037a25fe6447"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.222833 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.222898 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5ad0f4-4c98-4351-83df-037a25fe6447-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.222918 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2fd8\" (UniqueName: \"kubernetes.io/projected/ab5ad0f4-4c98-4351-83df-037a25fe6447-kube-api-access-b2fd8\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.642138 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ck5kg" event={"ID":"ab5ad0f4-4c98-4351-83df-037a25fe6447","Type":"ContainerDied","Data":"6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664"} Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.642435 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ck5kg" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.642593 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ccd67023a1748e28ae39943105529e3f5b73ddb0a589cb68e4ac77548b01664" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.940712 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:25 crc kubenswrapper[4681]: E0404 02:24:25.954078 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" containerName="oc" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.954109 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" containerName="oc" Apr 04 02:24:25 crc kubenswrapper[4681]: E0404 02:24:25.954127 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="dnsmasq-dns" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.954135 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="dnsmasq-dns" Apr 04 02:24:25 crc kubenswrapper[4681]: E0404 02:24:25.954146 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="init" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.954152 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="init" Apr 04 02:24:25 crc kubenswrapper[4681]: E0404 02:24:25.954170 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5ad0f4-4c98-4351-83df-037a25fe6447" containerName="keystone-db-sync" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.954178 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5ad0f4-4c98-4351-83df-037a25fe6447" containerName="keystone-db-sync" Apr 04 02:24:25 crc 
kubenswrapper[4681]: I0404 02:24:25.964269 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" containerName="oc" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.964865 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab5ad0f4-4c98-4351-83df-037a25fe6447" containerName="keystone-db-sync" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.964910 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfe3f5f0-7263-45d4-8393-eaa1ccebfe19" containerName="dnsmasq-dns" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.966005 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5bshl"] Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.966746 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.967221 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.978861 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.983740 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.984309 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.998071 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5bshl"] Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.998432 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.998585 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Apr 04 02:24:25 crc kubenswrapper[4681]: I0404 02:24:25.998621 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lv9fh" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142096 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " 
pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142130 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7d9m\" (UniqueName: \"kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142199 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142231 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142290 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142320 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjj4z\" (UniqueName: \"kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z\") pod \"keystone-bootstrap-5bshl\" (UID: 
\"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142364 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142405 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142429 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142453 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.142483 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" 
(UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246122 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246192 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246239 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246279 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246306 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7d9m\" (UniqueName: \"kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: 
\"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246352 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246378 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246406 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246424 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjj4z\" (UniqueName: \"kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246459 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc 
kubenswrapper[4681]: I0404 02:24:26.246492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.246511 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.247217 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.247905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.262321 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.263071 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.263777 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.265721 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.273374 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.274214 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.276346 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys\") pod 
\"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.285296 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-qdpnl"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.286599 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.329122 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7b9pr" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.330141 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts\") pod \"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.338574 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.339790 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.365547 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7d9m\" (UniqueName: \"kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m\") pod \"dnsmasq-dns-6b97b8b59c-n2nzx\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.367847 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjj4z\" (UniqueName: \"kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z\") pod 
\"keystone-bootstrap-5bshl\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.456737 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qdpnl"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.485220 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.485308 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.485342 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.485373 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.485398 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9fjtt\" (UniqueName: \"kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.530676 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.530753 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.534722 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.535626 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.586625 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.586697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fjtt\" (UniqueName: \"kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.586830 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.586888 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.586923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 
02:24:26.587489 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.592355 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.600735 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.600981 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.607631 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.623367 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fjtt\" (UniqueName: \"kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt\") pod \"placement-db-sync-qdpnl\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.647650 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-x6pg5"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.649189 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.654969 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5sd54"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.663641 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.664673 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x6pg5"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.668941 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7fbt5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.671279 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.671377 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.671416 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.671515 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.671586 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5gjfq" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.676379 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-27c9s"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.678463 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.682311 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.686736 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5sd54"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.688777 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.688889 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtb4\" (UniqueName: \"kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.688924 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.688981 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhls\" (UniqueName: \"kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc 
kubenswrapper[4681]: I0404 02:24:26.689013 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.689049 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.689079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.689139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.689250 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.690171 4681 
reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zmh52" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.706479 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.714516 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.730574 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-27c9s"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.743626 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.757094 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.759098 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.765979 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.766220 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.773989 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.776138 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qdpnl" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791234 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhls\" (UniqueName: \"kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791309 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791333 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791359 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791379 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc 
kubenswrapper[4681]: I0404 02:24:26.791402 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791429 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791451 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791471 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791488 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rrz8\" (UniqueName: \"kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791503 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkvrh\" (UniqueName: \"kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791520 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791592 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791619 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791654 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791685 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791700 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4m27\" (UniqueName: \"kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791745 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791763 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791785 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791803 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtb4\" (UniqueName: \"kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.791821 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.793483 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id\") pod 
\"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.800063 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.800891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.803520 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.803585 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.808674 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.809671 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.826334 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtb4\" (UniqueName: \"kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4\") pod \"neutron-db-sync-x6pg5\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.827779 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhls\" (UniqueName: \"kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls\") pod \"cinder-db-sync-5sd54\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909727 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909780 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4m27\" (UniqueName: \"kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909811 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909841 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909873 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909938 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.909998 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data\") pod \"ceilometer-0\" (UID: 
\"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910017 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rrz8\" (UniqueName: \"kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910070 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvrh\" (UniqueName: \"kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910137 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 
02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910212 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.910244 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.911909 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.917131 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.917957 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" 
(UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.918049 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.918694 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.919560 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.919688 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.922191 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 
02:24:26.926630 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.927423 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.927922 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config\") pod \"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.929623 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.930157 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.939961 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvrh\" (UniqueName: \"kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh\") pod 
\"dnsmasq-dns-c9c6dbd5-6ghst\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.941756 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rrz8\" (UniqueName: \"kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8\") pod \"barbican-db-sync-27c9s\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.943145 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4m27\" (UniqueName: \"kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27\") pod \"ceilometer-0\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " pod="openstack/ceilometer-0" Apr 04 02:24:26 crc kubenswrapper[4681]: I0404 02:24:26.988957 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.019375 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sd54" Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.041436 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-27c9s" Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.055808 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.093402 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.312480 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.424610 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-qdpnl"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.458077 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5bshl"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.676525 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdpnl" event={"ID":"96da315f-3451-45c0-b1fc-687c9d18dccf","Type":"ContainerStarted","Data":"f9d1f92be97bb4d6bf5ed4b01cceb1759d4964a0114f6ecb7862725ed0ad48d2"} Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.678105 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" event={"ID":"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0","Type":"ContainerStarted","Data":"5a630ec5a7e43a9712b4bc39cd76aed62eea1ccac4796e469c7f5632ea9020c8"} Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.679157 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5bshl" event={"ID":"cfb29995-ab75-4836-80c2-83a27ed076c6","Type":"ContainerStarted","Data":"47f0bc0e9a3ce0e147c702d856e0da757e2e32abc8eeadeee374a52f80692a90"} Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.710991 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5sd54"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.757371 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 02:24:27.909752 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-27c9s"] Apr 04 02:24:27 crc kubenswrapper[4681]: I0404 
02:24:27.921303 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-x6pg5"] Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.069311 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.300605 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.700523 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sd54" event={"ID":"b185d1fc-0c71-44ee-bb6d-915189acc4d8","Type":"ContainerStarted","Data":"66dc9e92ec9e1366ac8c48892b9d73eec5af8954b4cfb518a32d83a4f99a8f1a"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.704571 4681 generic.go:334] "Generic (PLEG): container finished" podID="4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" containerID="629caa7cb631831247c1cee36270c50f7e1333eb06900405f75b0e649002062e" exitCode=0 Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.704628 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" event={"ID":"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0","Type":"ContainerDied","Data":"629caa7cb631831247c1cee36270c50f7e1333eb06900405f75b0e649002062e"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.706739 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x6pg5" event={"ID":"0f2f493b-34f1-492d-834d-50b24313791c","Type":"ContainerStarted","Data":"d6503b4cf5a6450cce71c8e8e8835f62ed80d92a0688c8b72802cd1862e2f541"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.706769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x6pg5" event={"ID":"0f2f493b-34f1-492d-834d-50b24313791c","Type":"ContainerStarted","Data":"4c8f6d122f10b3d61bb3e230f57dc4526c5c165cf1ba772e1a8b5d52af70b6e8"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.716163 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="729d6f58-b3b5-4f01-a602-714dffa40001" containerID="7fb8c7a686056708c5c1b8a4e1d17ba2105bb74443a85cfe5d247dc52c7f8e52" exitCode=0 Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.716386 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" event={"ID":"729d6f58-b3b5-4f01-a602-714dffa40001","Type":"ContainerDied","Data":"7fb8c7a686056708c5c1b8a4e1d17ba2105bb74443a85cfe5d247dc52c7f8e52"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.716419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" event={"ID":"729d6f58-b3b5-4f01-a602-714dffa40001","Type":"ContainerStarted","Data":"6fed4d3fa0b544f7342ae385d38ef944d0a24c4d86f75d0ebb6bb891118d5082"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.720371 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5bshl" event={"ID":"cfb29995-ab75-4836-80c2-83a27ed076c6","Type":"ContainerStarted","Data":"74a88161c23f2dc24c5f4ba47501b13f8a0bb231bd5d732d3c6886ea8fb12991"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.726223 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-27c9s" event={"ID":"04422fbb-2a81-4627-9df3-dce05665ec03","Type":"ContainerStarted","Data":"e957a8c7489e6688f04564352379986913ef8758d43a1fa9edccd4940ceb87ea"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.752759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerStarted","Data":"6a997ab174b32de037fec0a617bcf1cae3de0426517769e3e8da2f30fdd842a1"} Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.822366 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-x6pg5" podStartSLOduration=2.8223450530000003 podStartE2EDuration="2.822345053s" podCreationTimestamp="2026-04-04 02:24:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:24:28.79930967 +0000 UTC m=+1748.465084790" watchObservedRunningTime="2026-04-04 02:24:28.822345053 +0000 UTC m=+1748.488120173" Apr 04 02:24:28 crc kubenswrapper[4681]: I0404 02:24:28.845854 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5bshl" podStartSLOduration=3.845836618 podStartE2EDuration="3.845836618s" podCreationTimestamp="2026-04-04 02:24:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:24:28.836765278 +0000 UTC m=+1748.502540398" watchObservedRunningTime="2026-04-04 02:24:28.845836618 +0000 UTC m=+1748.511611738" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.172482 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.282880 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7d9m\" (UniqueName: \"kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.282940 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.283038 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.283091 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.283199 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.283282 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc\") pod \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\" (UID: \"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0\") " Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.303877 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m" (OuterVolumeSpecName: "kube-api-access-z7d9m") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "kube-api-access-z7d9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.312431 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.320941 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.329941 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.347924 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config" (OuterVolumeSpecName: "config") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.350238 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" (UID: "4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386093 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386132 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386144 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386156 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386168 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7d9m\" (UniqueName: \"kubernetes.io/projected/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-kube-api-access-z7d9m\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.386181 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.769823 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" event={"ID":"729d6f58-b3b5-4f01-a602-714dffa40001","Type":"ContainerStarted","Data":"9ca95f147963f3f05f7b633b1aa66e9c1a6744c05024dcd7852e1e0d4624443a"} Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.769912 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.775462 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" event={"ID":"4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0","Type":"ContainerDied","Data":"5a630ec5a7e43a9712b4bc39cd76aed62eea1ccac4796e469c7f5632ea9020c8"} Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.775507 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b97b8b59c-n2nzx" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.775520 4681 scope.go:117] "RemoveContainer" containerID="629caa7cb631831247c1cee36270c50f7e1333eb06900405f75b0e649002062e" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.793978 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" podStartSLOduration=3.793955384 podStartE2EDuration="3.793955384s" podCreationTimestamp="2026-04-04 02:24:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:24:29.789607505 +0000 UTC m=+1749.455382625" watchObservedRunningTime="2026-04-04 02:24:29.793955384 +0000 UTC m=+1749.459730504" Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.866370 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:29 crc kubenswrapper[4681]: I0404 02:24:29.876576 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b97b8b59c-n2nzx"] Apr 04 02:24:30 crc kubenswrapper[4681]: I0404 02:24:30.413389 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" probeResult="failure" output=< Apr 04 02:24:30 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:24:30 crc kubenswrapper[4681]: > Apr 04 02:24:30 crc kubenswrapper[4681]: I0404 02:24:30.801695 4681 generic.go:334] "Generic (PLEG): container finished" podID="0011158a-2855-4b60-9798-77badda0f40c" containerID="38cc4e3a4a4b3258af6cd176ca88aa5d76a1f8c1cb46392b6b2217526bbf2c23" exitCode=0 Apr 04 02:24:30 crc kubenswrapper[4681]: I0404 02:24:30.801777 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-hrgm8" 
event={"ID":"0011158a-2855-4b60-9798-77badda0f40c","Type":"ContainerDied","Data":"38cc4e3a4a4b3258af6cd176ca88aa5d76a1f8c1cb46392b6b2217526bbf2c23"} Apr 04 02:24:31 crc kubenswrapper[4681]: I0404 02:24:31.305622 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" path="/var/lib/kubelet/pods/4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0/volumes" Apr 04 02:24:34 crc kubenswrapper[4681]: I0404 02:24:34.846983 4681 generic.go:334] "Generic (PLEG): container finished" podID="cfb29995-ab75-4836-80c2-83a27ed076c6" containerID="74a88161c23f2dc24c5f4ba47501b13f8a0bb231bd5d732d3c6886ea8fb12991" exitCode=0 Apr 04 02:24:34 crc kubenswrapper[4681]: I0404 02:24:34.847042 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5bshl" event={"ID":"cfb29995-ab75-4836-80c2-83a27ed076c6","Type":"ContainerDied","Data":"74a88161c23f2dc24c5f4ba47501b13f8a0bb231bd5d732d3c6886ea8fb12991"} Apr 04 02:24:37 crc kubenswrapper[4681]: I0404 02:24:37.057443 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:24:37 crc kubenswrapper[4681]: I0404 02:24:37.126947 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:24:37 crc kubenswrapper[4681]: I0404 02:24:37.127205 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" containerID="cri-o://a5dd28324d39eb03288061e75df5a1eedf7c8ef10f772e356f415fb16cc5d60b" gracePeriod=10 Apr 04 02:24:37 crc kubenswrapper[4681]: I0404 02:24:37.880776 4681 generic.go:334] "Generic (PLEG): container finished" podID="957e4b64-c18f-4cee-87dc-848a0d936626" containerID="a5dd28324d39eb03288061e75df5a1eedf7c8ef10f772e356f415fb16cc5d60b" exitCode=0 Apr 04 02:24:37 crc kubenswrapper[4681]: I0404 02:24:37.880930 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" event={"ID":"957e4b64-c18f-4cee-87dc-848a0d936626","Type":"ContainerDied","Data":"a5dd28324d39eb03288061e75df5a1eedf7c8ef10f772e356f415fb16cc5d60b"} Apr 04 02:24:39 crc kubenswrapper[4681]: I0404 02:24:39.405877 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:24:39 crc kubenswrapper[4681]: I0404 02:24:39.453513 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:24:39 crc kubenswrapper[4681]: I0404 02:24:39.641746 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:24:40 crc kubenswrapper[4681]: I0404 02:24:40.905097 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Apr 04 02:24:40 crc kubenswrapper[4681]: I0404 02:24:40.908554 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" containerID="cri-o://947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" gracePeriod=2 Apr 04 02:24:42 crc kubenswrapper[4681]: I0404 02:24:42.930493 4681 generic.go:334] "Generic (PLEG): container finished" podID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" exitCode=0 Apr 04 02:24:42 crc kubenswrapper[4681]: I0404 02:24:42.930570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" 
event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerDied","Data":"947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46"} Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.627514 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.779643 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys\") pod \"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.779718 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys\") pod \"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.779833 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts\") pod \"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.779883 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjj4z\" (UniqueName: \"kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z\") pod \"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.779935 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data\") pod 
\"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.780029 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle\") pod \"cfb29995-ab75-4836-80c2-83a27ed076c6\" (UID: \"cfb29995-ab75-4836-80c2-83a27ed076c6\") " Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.790819 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.790855 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.793130 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z" (OuterVolumeSpecName: "kube-api-access-mjj4z") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "kube-api-access-mjj4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.796546 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts" (OuterVolumeSpecName: "scripts") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.815753 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data" (OuterVolumeSpecName: "config-data") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.823788 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfb29995-ab75-4836-80c2-83a27ed076c6" (UID: "cfb29995-ab75-4836-80c2-83a27ed076c6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883043 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883099 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjj4z\" (UniqueName: \"kubernetes.io/projected/cfb29995-ab75-4836-80c2-83a27ed076c6-kube-api-access-mjj4z\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883112 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883124 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883134 4681 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-credential-keys\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.883142 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/cfb29995-ab75-4836-80c2-83a27ed076c6-fernet-keys\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.960831 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5bshl" event={"ID":"cfb29995-ab75-4836-80c2-83a27ed076c6","Type":"ContainerDied","Data":"47f0bc0e9a3ce0e147c702d856e0da757e2e32abc8eeadeee374a52f80692a90"} Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 
02:24:43.960883 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47f0bc0e9a3ce0e147c702d856e0da757e2e32abc8eeadeee374a52f80692a90" Apr 04 02:24:43 crc kubenswrapper[4681]: I0404 02:24:43.960912 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5bshl" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.612933 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.613013 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.613151 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7rrz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-27c9s_openstack(04422fbb-2a81-4627-9df3-dce05665ec03): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.614388 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-27c9s" 
podUID="04422fbb-2a81-4627-9df3-dce05665ec03" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.725325 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5bshl"] Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.735025 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5bshl"] Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.821537 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xmmq7"] Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.822032 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfb29995-ab75-4836-80c2-83a27ed076c6" containerName="keystone-bootstrap" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.822055 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfb29995-ab75-4836-80c2-83a27ed076c6" containerName="keystone-bootstrap" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.822070 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" containerName="init" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.822078 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" containerName="init" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.822323 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bccb9f5-44c9-4d2f-bb12-3ddae9f5eeb0" containerName="init" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.822349 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfb29995-ab75-4836-80c2-83a27ed076c6" containerName="keystone-bootstrap" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.823099 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.825919 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.826596 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.826635 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.826759 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lv9fh" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.826965 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.838949 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xmmq7"] Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.902761 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.902821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.902880 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.902907 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dqf9\" (UniqueName: \"kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.903045 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: I0404 02:24:44.903178 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:44 crc kubenswrapper[4681]: E0404 02:24:44.972283 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-27c9s" podUID="04422fbb-2a81-4627-9df3-dce05665ec03" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006211 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006594 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dqf9\" (UniqueName: \"kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.006653 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data\") pod 
\"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.014364 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.014783 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.016743 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.021358 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.029221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: 
I0404 02:24:45.032053 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dqf9\" (UniqueName: \"kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9\") pod \"keystone-bootstrap-xmmq7\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.180018 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.215938 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfb29995-ab75-4836-80c2-83a27ed076c6" path="/var/lib/kubelet/pods/cfb29995-ab75-4836-80c2-83a27ed076c6/volumes" Apr 04 02:24:45 crc kubenswrapper[4681]: I0404 02:24:45.904206 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Apr 04 02:24:49 crc kubenswrapper[4681]: E0404 02:24:49.352442 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:49 crc kubenswrapper[4681]: E0404 02:24:49.353376 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" 
cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:49 crc kubenswrapper[4681]: E0404 02:24:49.353616 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:49 crc kubenswrapper[4681]: E0404 02:24:49.353648 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:24:50 crc kubenswrapper[4681]: I0404 02:24:50.904381 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Apr 04 02:24:50 crc kubenswrapper[4681]: I0404 02:24:50.904802 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:24:55 crc kubenswrapper[4681]: I0404 02:24:55.905076 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: connect: connection refused" Apr 04 02:24:56 crc kubenswrapper[4681]: E0404 02:24:56.255552 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.110:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Apr 04 02:24:56 crc kubenswrapper[4681]: E0404 02:24:56.255862 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Apr 04 02:24:56 crc kubenswrapper[4681]: E0404 02:24:56.256010 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fjtt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readine
ssProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-qdpnl_openstack(96da315f-3451-45c0-b1fc-687c9d18dccf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:24:56 crc kubenswrapper[4681]: E0404 02:24:56.257242 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-qdpnl" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.316097 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.501769 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data\") pod \"0011158a-2855-4b60-9798-77badda0f40c\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.502831 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5swv\" (UniqueName: \"kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv\") pod \"0011158a-2855-4b60-9798-77badda0f40c\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.503123 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data\") pod \"0011158a-2855-4b60-9798-77badda0f40c\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.503155 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle\") pod \"0011158a-2855-4b60-9798-77badda0f40c\" (UID: \"0011158a-2855-4b60-9798-77badda0f40c\") " Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.507806 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "0011158a-2855-4b60-9798-77badda0f40c" (UID: "0011158a-2855-4b60-9798-77badda0f40c"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.510925 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv" (OuterVolumeSpecName: "kube-api-access-f5swv") pod "0011158a-2855-4b60-9798-77badda0f40c" (UID: "0011158a-2855-4b60-9798-77badda0f40c"). InnerVolumeSpecName "kube-api-access-f5swv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.524336 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.524443 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.524505 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.525359 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.525420 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" gracePeriod=600 Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.560648 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0011158a-2855-4b60-9798-77badda0f40c" (UID: "0011158a-2855-4b60-9798-77badda0f40c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.572946 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data" (OuterVolumeSpecName: "config-data") pod "0011158a-2855-4b60-9798-77badda0f40c" (UID: "0011158a-2855-4b60-9798-77badda0f40c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.605383 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.605420 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.605440 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0011158a-2855-4b60-9798-77badda0f40c-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:56 crc kubenswrapper[4681]: I0404 02:24:56.605499 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5swv\" (UniqueName: \"kubernetes.io/projected/0011158a-2855-4b60-9798-77badda0f40c-kube-api-access-f5swv\") on node \"crc\" DevicePath \"\"" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.078116 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" exitCode=0 Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.078183 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da"} Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.078233 4681 scope.go:117] "RemoveContainer" containerID="3654cfff66d5807945bfb8fd6cd5a2240bc45afc78ca743b318542d8aeaa09d5" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.082320 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-hrgm8" event={"ID":"0011158a-2855-4b60-9798-77badda0f40c","Type":"ContainerDied","Data":"d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7"} Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.082366 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8c99c0157e0422493241afb8a05976a44726be003d4dd357e787d7b6fcc34e7" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.082411 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-hrgm8" Apr 04 02:24:57 crc kubenswrapper[4681]: E0404 02:24:57.087223 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-qdpnl" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.610897 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:24:57 crc kubenswrapper[4681]: E0404 02:24:57.611613 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0011158a-2855-4b60-9798-77badda0f40c" containerName="watcher-db-sync" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.611634 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0011158a-2855-4b60-9798-77badda0f40c" containerName="watcher-db-sync" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.611874 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0011158a-2855-4b60-9798-77badda0f40c" containerName="watcher-db-sync" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.612987 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.615541 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-txcwh" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.615594 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.627462 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6hb4\" (UniqueName: \"kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.627543 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.627586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.627658 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.627722 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.640023 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.691466 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.693372 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.698840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729132 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729460 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6hb4\" (UniqueName: \"kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729521 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729556 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729581 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729620 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729644 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xl6d\" (UniqueName: \"kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729673 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729865 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: 
\"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.729983 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.730258 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.739584 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.749344 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.751166 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.763071 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 
02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.764381 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.766641 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.767024 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6hb4\" (UniqueName: \"kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4\") pod \"watcher-api-0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.775011 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832100 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xl6d\" (UniqueName: \"kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832147 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " 
pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832189 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzlrx\" (UniqueName: \"kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832452 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832493 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.832520 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 
02:24:57.832550 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.833221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.837607 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.837849 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.849956 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xl6d\" (UniqueName: \"kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d\") pod \"watcher-applier-0\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") " pod="openstack/watcher-applier-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.931488 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.933912 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.933967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzlrx\" (UniqueName: \"kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.934106 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.934141 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.934170 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc 
kubenswrapper[4681]: I0404 02:24:57.935456 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.938012 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.938689 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.939309 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:57 crc kubenswrapper[4681]: I0404 02:24:57.953143 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzlrx\" (UniqueName: \"kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx\") pod \"watcher-decision-engine-0\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:24:58 crc kubenswrapper[4681]: I0404 02:24:58.025135 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Apr 04 02:24:58 crc kubenswrapper[4681]: I0404 02:24:58.237530 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:24:59 crc kubenswrapper[4681]: E0404 02:24:59.352603 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:59 crc kubenswrapper[4681]: E0404 02:24:59.353481 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:59 crc kubenswrapper[4681]: E0404 02:24:59.353994 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:24:59 crc kubenswrapper[4681]: E0404 02:24:59.354038 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h5zv5" 
podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:25:05 crc kubenswrapper[4681]: I0404 02:25:05.904654 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Apr 04 02:25:14 crc kubenswrapper[4681]: E0404 02:25:09.352698 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:25:14 crc kubenswrapper[4681]: E0404 02:25:09.353678 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:25:14 crc kubenswrapper[4681]: E0404 02:25:09.354036 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is running failed: container process not found" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" cmd=["grpc_health_probe","-addr=:50051"] Apr 04 02:25:14 crc kubenswrapper[4681]: E0404 02:25:09.354071 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46 is 
running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-h5zv5" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:25:14 crc kubenswrapper[4681]: I0404 02:25:10.905458 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Apr 04 02:25:15 crc kubenswrapper[4681]: I0404 02:25:15.905985 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Apr 04 02:25:17 crc kubenswrapper[4681]: I0404 02:25:17.619180 4681 scope.go:117] "RemoveContainer" containerID="bae30a9c8f9008340e80437202b1a37deeb64b1ca587aca5c7bfac9ec418144a" Apr 04 02:25:18 crc kubenswrapper[4681]: E0404 02:25:18.366051 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.409371 4681 scope.go:117] "RemoveContainer" containerID="a08da319b1adb8c73595ab79416fd2ad1b62d1a5ad788992d80fc4351c63101a" Apr 04 02:25:18 crc kubenswrapper[4681]: E0404 02:25:18.419994 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Apr 04 02:25:18 crc kubenswrapper[4681]: E0404 
02:25:18.420079 4681 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.110:5001/podified-master-centos10/openstack-cinder-api:watcher_latest" Apr 04 02:25:18 crc kubenswrapper[4681]: E0404 02:25:18.420331 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.110:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-ap
i-access-xhhls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5sd54_openstack(b185d1fc-0c71-44ee-bb6d-915189acc4d8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Apr 04 02:25:18 crc kubenswrapper[4681]: E0404 02:25:18.421550 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5sd54" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.723633 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.734411 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832189 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832302 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832333 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832385 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832420 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.832452 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9ql2\" 
(UniqueName: \"kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2\") pod \"957e4b64-c18f-4cee-87dc-848a0d936626\" (UID: \"957e4b64-c18f-4cee-87dc-848a0d936626\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.852740 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2" (OuterVolumeSpecName: "kube-api-access-l9ql2") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "kube-api-access-l9ql2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.934509 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content\") pod \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.937213 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities" (OuterVolumeSpecName: "utilities") pod "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" (UID: "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.937654 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities\") pod \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.937710 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lphdq\" (UniqueName: \"kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq\") pod \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\" (UID: \"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902\") " Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.938136 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9ql2\" (UniqueName: \"kubernetes.io/projected/957e4b64-c18f-4cee-87dc-848a0d936626-kube-api-access-l9ql2\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.942775 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq" (OuterVolumeSpecName: "kube-api-access-lphdq") pod "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" (UID: "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902"). InnerVolumeSpecName "kube-api-access-lphdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.955488 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.967168 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config" (OuterVolumeSpecName: "config") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.973394 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.973931 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:18 crc kubenswrapper[4681]: I0404 02:25:18.974912 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "957e4b64-c18f-4cee-87dc-848a0d936626" (UID: "957e4b64-c18f-4cee-87dc-848a0d936626"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040600 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040637 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040654 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lphdq\" (UniqueName: \"kubernetes.io/projected/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-kube-api-access-lphdq\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040667 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040681 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040691 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.040704 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/957e4b64-c18f-4cee-87dc-848a0d936626-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.078754 4681 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xmmq7"] Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.110755 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" (UID: "56f8d1b1-7cb9-400a-a94b-71bf1fcf0902"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.142553 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.151683 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:25:19 crc kubenswrapper[4681]: W0404 02:25:19.160466 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod265dfced_e437_45c0_89bc_ff70885395f0.slice/crio-7448faba130879c884a34568d9e529f8307994d139b40f24d977bc1ff991fb46 WatchSource:0}: Error finding container 7448faba130879c884a34568d9e529f8307994d139b40f24d977bc1ff991fb46: Status 404 returned error can't find the container with id 7448faba130879c884a34568d9e529f8307994d139b40f24d977bc1ff991fb46 Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.162763 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:19 crc kubenswrapper[4681]: W0404 02:25:19.163891 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod442b54de_22a7_4121_aab3_5365d4e0872d.slice/crio-ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8 WatchSource:0}: Error finding 
container ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8: Status 404 returned error can't find the container with id ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8 Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.171663 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Apr 04 02:25:19 crc kubenswrapper[4681]: W0404 02:25:19.178841 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb96bc277_5d81_4864_86b1_aeeab7142a0b.slice/crio-0af659ced1fc306ce817fc64cc24269fac2b32876a5cc1e4399717ec793a845a WatchSource:0}: Error finding container 0af659ced1fc306ce817fc64cc24269fac2b32876a5cc1e4399717ec793a845a: Status 404 returned error can't find the container with id 0af659ced1fc306ce817fc64cc24269fac2b32876a5cc1e4399717ec793a845a Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.304348 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmmq7" event={"ID":"bf297ed4-229b-492f-bd27-5ea5e2279816","Type":"ContainerStarted","Data":"8229141681ecc07a19f839c7acec322417be94f11887bb96ea0c7ea1ebe8263a"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.304388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmmq7" event={"ID":"bf297ed4-229b-492f-bd27-5ea5e2279816","Type":"ContainerStarted","Data":"19c914a6c8cf4e59b4363d96467b3c256f547f474c0acbd2534d9a22f4adad0e"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.310002 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" event={"ID":"957e4b64-c18f-4cee-87dc-848a0d936626","Type":"ContainerDied","Data":"0388460ee3a4e4df638804e819dbae88021aec4765638fc313e36e56d05a6302"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.310053 4681 scope.go:117] "RemoveContainer" 
containerID="a5dd28324d39eb03288061e75df5a1eedf7c8ef10f772e356f415fb16cc5d60b" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.310176 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.317471 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerStarted","Data":"ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.329557 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:25:19 crc kubenswrapper[4681]: E0404 02:25:19.329819 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.330770 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-xmmq7" podStartSLOduration=35.330757793 podStartE2EDuration="35.330757793s" podCreationTimestamp="2026-04-04 02:24:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:19.325252952 +0000 UTC m=+1798.991028072" watchObservedRunningTime="2026-04-04 02:25:19.330757793 +0000 UTC m=+1798.996532923" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.346908 4681 scope.go:117] "RemoveContainer" containerID="0667ee411a264f5e247fddd5b11ff7055e119980761e03859eb68a7d5f2260cb" 
Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.352889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b96bc277-5d81-4864-86b1-aeeab7142a0b","Type":"ContainerStarted","Data":"0af659ced1fc306ce817fc64cc24269fac2b32876a5cc1e4399717ec793a845a"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.371312 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerStarted","Data":"7448faba130879c884a34568d9e529f8307994d139b40f24d977bc1ff991fb46"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.386294 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-27c9s" event={"ID":"04422fbb-2a81-4627-9df3-dce05665ec03","Type":"ContainerStarted","Data":"2c4572453ef291c8bd6015810304578098b778e473aa7ccdb187df0cb8d69cdc"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.411989 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.412044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdpnl" event={"ID":"96da315f-3451-45c0-b1fc-687c9d18dccf","Type":"ContainerStarted","Data":"3594b47fea9f64289499cbf6739da54b8e258e9170155f1eee69bfeb43a9f8d9"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.437734 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerStarted","Data":"ceee4573b358f1ca9a688f529ac7b6118eb053a9ec5924129a1dd9f1be1c8b60"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.444892 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67cf7bcc65-9tfw2"] Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.460477 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h5zv5" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.461452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h5zv5" event={"ID":"56f8d1b1-7cb9-400a-a94b-71bf1fcf0902","Type":"ContainerDied","Data":"5749ac5909873847d332dc6e5debb4de1613a169e07ef94519d894b0acec6d44"} Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.461500 4681 scope.go:117] "RemoveContainer" containerID="947573167cb8c13b4ed066c6d80f2a580354c4a1db92a38bc486066b53b68b46" Apr 04 02:25:19 crc kubenswrapper[4681]: E0404 02:25:19.467788 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.110:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-5sd54" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.481188 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-27c9s" podStartSLOduration=2.855576605 podStartE2EDuration="53.480802532s" podCreationTimestamp="2026-04-04 02:24:26 +0000 UTC" firstStartedPulling="2026-04-04 02:24:27.931765856 +0000 UTC m=+1747.597540976" lastFinishedPulling="2026-04-04 02:25:18.556991783 +0000 UTC m=+1798.222766903" observedRunningTime="2026-04-04 02:25:19.411662464 +0000 UTC m=+1799.077437594" watchObservedRunningTime="2026-04-04 02:25:19.480802532 +0000 UTC m=+1799.146577652" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.494670 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-qdpnl" podStartSLOduration=2.38367537 podStartE2EDuration="53.494647001s" podCreationTimestamp="2026-04-04 02:24:26 +0000 UTC" firstStartedPulling="2026-04-04 02:24:27.452474529 +0000 UTC m=+1747.118249649" lastFinishedPulling="2026-04-04 
02:25:18.56344616 +0000 UTC m=+1798.229221280" observedRunningTime="2026-04-04 02:25:19.4530284 +0000 UTC m=+1799.118803520" watchObservedRunningTime="2026-04-04 02:25:19.494647001 +0000 UTC m=+1799.160422121" Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.603232 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:25:19 crc kubenswrapper[4681]: I0404 02:25:19.623691 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h5zv5"] Apr 04 02:25:20 crc kubenswrapper[4681]: I0404 02:25:20.480011 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerStarted","Data":"04f29e7d25764a31953e8fb51fda7f59406af1dcc14ef3e659dfc251f9eb6dac"} Apr 04 02:25:20 crc kubenswrapper[4681]: I0404 02:25:20.692314 4681 scope.go:117] "RemoveContainer" containerID="fd3da4d24c0d780332428ccba21dc5cb2275193f646e44626f92a74d85cdc365" Apr 04 02:25:20 crc kubenswrapper[4681]: I0404 02:25:20.801483 4681 scope.go:117] "RemoveContainer" containerID="e5b50429033c9d35bc05c8e935f01f9273eba046a67ebbe1892e67cdcff6a59e" Apr 04 02:25:20 crc kubenswrapper[4681]: I0404 02:25:20.907523 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-67cf7bcc65-9tfw2" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.153:5353: i/o timeout" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.220981 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" path="/var/lib/kubelet/pods/56f8d1b1-7cb9-400a-a94b-71bf1fcf0902/volumes" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.222480 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" 
path="/var/lib/kubelet/pods/957e4b64-c18f-4cee-87dc-848a0d936626/volumes" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.495877 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerStarted","Data":"8b63eab707cd330468951cdf271ac0a95bcd43320b17bc8881c7a76e6d644d30"} Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.505666 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b96bc277-5d81-4864-86b1-aeeab7142a0b","Type":"ContainerStarted","Data":"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"} Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.507825 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerStarted","Data":"6a57770a6ce58cb4bf37f62b51db36cbd29a79d068c2b5ef35ee765de5ea857f"} Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.508557 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.510982 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerStarted","Data":"596d58724c4139b2b8f51ced98fde4359adf5f591dbff7adc473e65110e87d1d"} Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.530469 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=22.916173043 podStartE2EDuration="24.530445615s" podCreationTimestamp="2026-04-04 02:24:57 +0000 UTC" firstStartedPulling="2026-04-04 02:25:19.193386292 +0000 UTC m=+1798.859161412" lastFinishedPulling="2026-04-04 02:25:20.807658864 +0000 UTC m=+1800.473433984" observedRunningTime="2026-04-04 02:25:21.521475179 +0000 UTC m=+1801.187250309" 
watchObservedRunningTime="2026-04-04 02:25:21.530445615 +0000 UTC m=+1801.196220735" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.561851 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=24.561832676 podStartE2EDuration="24.561832676s" podCreationTimestamp="2026-04-04 02:24:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:21.548078099 +0000 UTC m=+1801.213853229" watchObservedRunningTime="2026-04-04 02:25:21.561832676 +0000 UTC m=+1801.227607796" Apr 04 02:25:21 crc kubenswrapper[4681]: I0404 02:25:21.576323 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=22.935637878 podStartE2EDuration="24.576304384s" podCreationTimestamp="2026-04-04 02:24:57 +0000 UTC" firstStartedPulling="2026-04-04 02:25:19.166996988 +0000 UTC m=+1798.832772108" lastFinishedPulling="2026-04-04 02:25:20.807663494 +0000 UTC m=+1800.473438614" observedRunningTime="2026-04-04 02:25:21.575116571 +0000 UTC m=+1801.240891711" watchObservedRunningTime="2026-04-04 02:25:21.576304384 +0000 UTC m=+1801.242079504" Apr 04 02:25:22 crc kubenswrapper[4681]: I0404 02:25:22.932424 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:25:23 crc kubenswrapper[4681]: I0404 02:25:23.026229 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Apr 04 02:25:23 crc kubenswrapper[4681]: I0404 02:25:23.531505 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 02:25:23 crc kubenswrapper[4681]: I0404 02:25:23.868084 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:25:27 crc kubenswrapper[4681]: I0404 02:25:27.932734 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Apr 04 02:25:27 crc kubenswrapper[4681]: I0404 02:25:27.936762 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.025574 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.053887 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.240237 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.240371 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.269589 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.580693 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.605304 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Apr 04 02:25:28 crc kubenswrapper[4681]: I0404 02:25:28.615605 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:31 crc kubenswrapper[4681]: I0404 02:25:31.198708 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:31 crc kubenswrapper[4681]: I0404 02:25:31.199152 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="265dfced-e437-45c0-89bc-ff70885395f0" 
containerName="watcher-api-log" containerID="cri-o://04f29e7d25764a31953e8fb51fda7f59406af1dcc14ef3e659dfc251f9eb6dac" gracePeriod=30 Apr 04 02:25:31 crc kubenswrapper[4681]: I0404 02:25:31.199296 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api" containerID="cri-o://6a57770a6ce58cb4bf37f62b51db36cbd29a79d068c2b5ef35ee765de5ea857f" gracePeriod=30 Apr 04 02:25:31 crc kubenswrapper[4681]: I0404 02:25:31.610803 4681 generic.go:334] "Generic (PLEG): container finished" podID="265dfced-e437-45c0-89bc-ff70885395f0" containerID="04f29e7d25764a31953e8fb51fda7f59406af1dcc14ef3e659dfc251f9eb6dac" exitCode=143 Apr 04 02:25:31 crc kubenswrapper[4681]: I0404 02:25:31.610855 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerDied","Data":"04f29e7d25764a31953e8fb51fda7f59406af1dcc14ef3e659dfc251f9eb6dac"} Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.622478 4681 generic.go:334] "Generic (PLEG): container finished" podID="bf297ed4-229b-492f-bd27-5ea5e2279816" containerID="8229141681ecc07a19f839c7acec322417be94f11887bb96ea0c7ea1ebe8263a" exitCode=0 Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.622570 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmmq7" event={"ID":"bf297ed4-229b-492f-bd27-5ea5e2279816","Type":"ContainerDied","Data":"8229141681ecc07a19f839c7acec322417be94f11887bb96ea0c7ea1ebe8263a"} Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.625666 4681 generic.go:334] "Generic (PLEG): container finished" podID="265dfced-e437-45c0-89bc-ff70885395f0" containerID="6a57770a6ce58cb4bf37f62b51db36cbd29a79d068c2b5ef35ee765de5ea857f" exitCode=0 Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.625716 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerDied","Data":"6a57770a6ce58cb4bf37f62b51db36cbd29a79d068c2b5ef35ee765de5ea857f"} Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.932641 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.164:9322/\": dial tcp 10.217.0.164:9322: connect: connection refused" Apr 04 02:25:32 crc kubenswrapper[4681]: I0404 02:25:32.932723 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9322/\": dial tcp 10.217.0.164:9322: connect: connection refused" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.358863 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.427943 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6hb4\" (UniqueName: \"kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4\") pod \"265dfced-e437-45c0-89bc-ff70885395f0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.428037 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca\") pod \"265dfced-e437-45c0-89bc-ff70885395f0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.428129 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle\") pod \"265dfced-e437-45c0-89bc-ff70885395f0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.428188 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data\") pod \"265dfced-e437-45c0-89bc-ff70885395f0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.428334 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs\") pod \"265dfced-e437-45c0-89bc-ff70885395f0\" (UID: \"265dfced-e437-45c0-89bc-ff70885395f0\") " Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.428811 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs" (OuterVolumeSpecName: "logs") pod "265dfced-e437-45c0-89bc-ff70885395f0" (UID: "265dfced-e437-45c0-89bc-ff70885395f0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.434555 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4" (OuterVolumeSpecName: "kube-api-access-b6hb4") pod "265dfced-e437-45c0-89bc-ff70885395f0" (UID: "265dfced-e437-45c0-89bc-ff70885395f0"). InnerVolumeSpecName "kube-api-access-b6hb4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.462525 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "265dfced-e437-45c0-89bc-ff70885395f0" (UID: "265dfced-e437-45c0-89bc-ff70885395f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.462585 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "265dfced-e437-45c0-89bc-ff70885395f0" (UID: "265dfced-e437-45c0-89bc-ff70885395f0"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.482662 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data" (OuterVolumeSpecName: "config-data") pod "265dfced-e437-45c0-89bc-ff70885395f0" (UID: "265dfced-e437-45c0-89bc-ff70885395f0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.529980 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/265dfced-e437-45c0-89bc-ff70885395f0-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.530008 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6hb4\" (UniqueName: \"kubernetes.io/projected/265dfced-e437-45c0-89bc-ff70885395f0-kube-api-access-b6hb4\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.530018 4681 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.530026 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.530034 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/265dfced-e437-45c0-89bc-ff70885395f0-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.636357 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerStarted","Data":"eefc3a63314e5a2815171b5855800cfd3e778f94f71f55e799d20a984520e16d"} Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.638281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"265dfced-e437-45c0-89bc-ff70885395f0","Type":"ContainerDied","Data":"7448faba130879c884a34568d9e529f8307994d139b40f24d977bc1ff991fb46"} Apr 04 
02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.638326 4681 scope.go:117] "RemoveContainer" containerID="6a57770a6ce58cb4bf37f62b51db36cbd29a79d068c2b5ef35ee765de5ea857f" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.638292 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.674633 4681 scope.go:117] "RemoveContainer" containerID="04f29e7d25764a31953e8fb51fda7f59406af1dcc14ef3e659dfc251f9eb6dac" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.675475 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.688053 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.712591 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.714886 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.714921 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.714951 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api-log" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.714960 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api-log" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.714994 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="init" Apr 04 02:25:33 crc kubenswrapper[4681]: 
I0404 02:25:33.715002 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="init" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.715022 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715030 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.715058 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="extract-content" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715065 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="extract-content" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.715095 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715103 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" Apr 04 02:25:33 crc kubenswrapper[4681]: E0404 02:25:33.715133 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="extract-utilities" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715144 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="extract-utilities" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715789 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f8d1b1-7cb9-400a-a94b-71bf1fcf0902" containerName="registry-server" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715827 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="957e4b64-c18f-4cee-87dc-848a0d936626" containerName="dnsmasq-dns" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715857 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.715868 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="265dfced-e437-45c0-89bc-ff70885395f0" containerName="watcher-api-log" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.717783 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.731546 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.732730 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.734059 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.734522 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740098 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740228 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs\") 
pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740391 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5vhj\" (UniqueName: \"kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740705 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.740902 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " 
pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.841953 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5vhj\" (UniqueName: \"kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842137 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842298 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842331 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.842754 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.846538 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.846791 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.846840 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 
02:25:33.847129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.847790 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:33 crc kubenswrapper[4681]: I0404 02:25:33.867770 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5vhj\" (UniqueName: \"kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj\") pod \"watcher-api-0\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") " pod="openstack/watcher-api-0" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.025059 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.057033 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.150506 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dqf9\" (UniqueName: \"kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.150701 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.150783 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.150840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.150965 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.151027 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle\") pod \"bf297ed4-229b-492f-bd27-5ea5e2279816\" (UID: \"bf297ed4-229b-492f-bd27-5ea5e2279816\") " Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.156784 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9" (OuterVolumeSpecName: "kube-api-access-9dqf9") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "kube-api-access-9dqf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.158204 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.171053 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.176405 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts" (OuterVolumeSpecName: "scripts") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.180955 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.208427 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data" (OuterVolumeSpecName: "config-data") pod "bf297ed4-229b-492f-bd27-5ea5e2279816" (UID: "bf297ed4-229b-492f-bd27-5ea5e2279816"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254909 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-fernet-keys\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254952 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254966 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dqf9\" (UniqueName: \"kubernetes.io/projected/bf297ed4-229b-492f-bd27-5ea5e2279816-kube-api-access-9dqf9\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254976 4681 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-credential-keys\") on node \"crc\" 
DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254983 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.254992 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf297ed4-229b-492f-bd27-5ea5e2279816-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.522893 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.661081 4681 generic.go:334] "Generic (PLEG): container finished" podID="96da315f-3451-45c0-b1fc-687c9d18dccf" containerID="3594b47fea9f64289499cbf6739da54b8e258e9170155f1eee69bfeb43a9f8d9" exitCode=0 Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.661145 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdpnl" event={"ID":"96da315f-3451-45c0-b1fc-687c9d18dccf","Type":"ContainerDied","Data":"3594b47fea9f64289499cbf6739da54b8e258e9170155f1eee69bfeb43a9f8d9"} Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.675745 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerStarted","Data":"26c152499389a5c46d55e752b35647d987efcfc3ce8bd799faba70b2a2d8765c"} Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.691353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xmmq7" event={"ID":"bf297ed4-229b-492f-bd27-5ea5e2279816","Type":"ContainerDied","Data":"19c914a6c8cf4e59b4363d96467b3c256f547f474c0acbd2534d9a22f4adad0e"} Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.691691 4681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="19c914a6c8cf4e59b4363d96467b3c256f547f474c0acbd2534d9a22f4adad0e" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.691402 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xmmq7" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.757168 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-5cdf6cfbdd-xgxdx"] Apr 04 02:25:34 crc kubenswrapper[4681]: E0404 02:25:34.757701 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf297ed4-229b-492f-bd27-5ea5e2279816" containerName="keystone-bootstrap" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.757726 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf297ed4-229b-492f-bd27-5ea5e2279816" containerName="keystone-bootstrap" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.757943 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf297ed4-229b-492f-bd27-5ea5e2279816" containerName="keystone-bootstrap" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.763625 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.765941 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-lv9fh" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.767830 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.768091 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.768288 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.768440 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.769217 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.770985 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cdf6cfbdd-xgxdx"] Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.865878 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-public-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.865980 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-fernet-keys\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " 
pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866007 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-config-data\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866056 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-internal-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866083 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-scripts\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866104 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-combined-ca-bundle\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866122 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-credential-keys\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " 
pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.866145 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btnld\" (UniqueName: \"kubernetes.io/projected/85cc490e-cee8-405f-b498-41415aae210e-kube-api-access-btnld\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968159 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-public-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968303 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-fernet-keys\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-config-data\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968397 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-internal-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc 
kubenswrapper[4681]: I0404 02:25:34.968417 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-scripts\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-combined-ca-bundle\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968470 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-credential-keys\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.968492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btnld\" (UniqueName: \"kubernetes.io/projected/85cc490e-cee8-405f-b498-41415aae210e-kube-api-access-btnld\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.972719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-internal-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.973809 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-combined-ca-bundle\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.975015 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-public-tls-certs\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.975157 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-scripts\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.976904 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-fernet-keys\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.977366 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-config-data\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.977462 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/85cc490e-cee8-405f-b498-41415aae210e-credential-keys\") 
pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:34 crc kubenswrapper[4681]: I0404 02:25:34.984936 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btnld\" (UniqueName: \"kubernetes.io/projected/85cc490e-cee8-405f-b498-41415aae210e-kube-api-access-btnld\") pod \"keystone-5cdf6cfbdd-xgxdx\" (UID: \"85cc490e-cee8-405f-b498-41415aae210e\") " pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.085357 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.203000 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:25:35 crc kubenswrapper[4681]: E0404 02:25:35.203634 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.219642 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="265dfced-e437-45c0-89bc-ff70885395f0" path="/var/lib/kubelet/pods/265dfced-e437-45c0-89bc-ff70885395f0/volumes" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.547770 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-5cdf6cfbdd-xgxdx"] Apr 04 02:25:35 crc kubenswrapper[4681]: W0404 02:25:35.551294 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod85cc490e_cee8_405f_b498_41415aae210e.slice/crio-442fd5caa3e6bd8d38e9a8f46f3f1ef1bad8cf61b696a1bdbd3b92b6d0742ab6 WatchSource:0}: Error finding container 442fd5caa3e6bd8d38e9a8f46f3f1ef1bad8cf61b696a1bdbd3b92b6d0742ab6: Status 404 returned error can't find the container with id 442fd5caa3e6bd8d38e9a8f46f3f1ef1bad8cf61b696a1bdbd3b92b6d0742ab6 Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.715205 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cdf6cfbdd-xgxdx" event={"ID":"85cc490e-cee8-405f-b498-41415aae210e","Type":"ContainerStarted","Data":"442fd5caa3e6bd8d38e9a8f46f3f1ef1bad8cf61b696a1bdbd3b92b6d0742ab6"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.719502 4681 generic.go:334] "Generic (PLEG): container finished" podID="442b54de-22a7-4121-aab3-5365d4e0872d" containerID="596d58724c4139b2b8f51ced98fde4359adf5f591dbff7adc473e65110e87d1d" exitCode=1 Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.719580 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerDied","Data":"596d58724c4139b2b8f51ced98fde4359adf5f591dbff7adc473e65110e87d1d"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.720291 4681 scope.go:117] "RemoveContainer" containerID="596d58724c4139b2b8f51ced98fde4359adf5f591dbff7adc473e65110e87d1d" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.722224 4681 generic.go:334] "Generic (PLEG): container finished" podID="04422fbb-2a81-4627-9df3-dce05665ec03" containerID="2c4572453ef291c8bd6015810304578098b778e473aa7ccdb187df0cb8d69cdc" exitCode=0 Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.722301 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-27c9s" 
event={"ID":"04422fbb-2a81-4627-9df3-dce05665ec03","Type":"ContainerDied","Data":"2c4572453ef291c8bd6015810304578098b778e473aa7ccdb187df0cb8d69cdc"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.732428 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sd54" event={"ID":"b185d1fc-0c71-44ee-bb6d-915189acc4d8","Type":"ContainerStarted","Data":"48562f62ef66c90bddf853a7d75b62412d3429566b2fc45fddf86e672ea4d924"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.738620 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerStarted","Data":"9c4d8c127d615e4b91cccd2399a7ae10bbe006ecd4f7e254d1c65c1445b9fbcb"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.738660 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerStarted","Data":"8fa9d5cdca90012952f0955ee8036a245dcd2d90f3de93af8a23434c9c6823b1"} Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.738679 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.769955 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5sd54" podStartSLOduration=3.164773002 podStartE2EDuration="1m9.769932655s" podCreationTimestamp="2026-04-04 02:24:26 +0000 UTC" firstStartedPulling="2026-04-04 02:24:27.714679747 +0000 UTC m=+1747.380454867" lastFinishedPulling="2026-04-04 02:25:34.3198394 +0000 UTC m=+1813.985614520" observedRunningTime="2026-04-04 02:25:35.756005922 +0000 UTC m=+1815.421781052" watchObservedRunningTime="2026-04-04 02:25:35.769932655 +0000 UTC m=+1815.435707775" Apr 04 02:25:35 crc kubenswrapper[4681]: I0404 02:25:35.810979 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/watcher-api-0" podStartSLOduration=2.810954551 podStartE2EDuration="2.810954551s" podCreationTimestamp="2026-04-04 02:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:35.805273665 +0000 UTC m=+1815.471048805" watchObservedRunningTime="2026-04-04 02:25:35.810954551 +0000 UTC m=+1815.476729671" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.169154 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-qdpnl" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.297055 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts\") pod \"96da315f-3451-45c0-b1fc-687c9d18dccf\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.297325 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data\") pod \"96da315f-3451-45c0-b1fc-687c9d18dccf\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.297398 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fjtt\" (UniqueName: \"kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt\") pod \"96da315f-3451-45c0-b1fc-687c9d18dccf\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.297488 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs\") pod \"96da315f-3451-45c0-b1fc-687c9d18dccf\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " Apr 04 
02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.297580 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle\") pod \"96da315f-3451-45c0-b1fc-687c9d18dccf\" (UID: \"96da315f-3451-45c0-b1fc-687c9d18dccf\") " Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.316001 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs" (OuterVolumeSpecName: "logs") pod "96da315f-3451-45c0-b1fc-687c9d18dccf" (UID: "96da315f-3451-45c0-b1fc-687c9d18dccf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.337409 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt" (OuterVolumeSpecName: "kube-api-access-9fjtt") pod "96da315f-3451-45c0-b1fc-687c9d18dccf" (UID: "96da315f-3451-45c0-b1fc-687c9d18dccf"). InnerVolumeSpecName "kube-api-access-9fjtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.345420 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts" (OuterVolumeSpecName: "scripts") pod "96da315f-3451-45c0-b1fc-687c9d18dccf" (UID: "96da315f-3451-45c0-b1fc-687c9d18dccf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.346612 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data" (OuterVolumeSpecName: "config-data") pod "96da315f-3451-45c0-b1fc-687c9d18dccf" (UID: "96da315f-3451-45c0-b1fc-687c9d18dccf"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.382445 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96da315f-3451-45c0-b1fc-687c9d18dccf" (UID: "96da315f-3451-45c0-b1fc-687c9d18dccf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.400419 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.400452 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.400466 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fjtt\" (UniqueName: \"kubernetes.io/projected/96da315f-3451-45c0-b1fc-687c9d18dccf-kube-api-access-9fjtt\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.400481 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96da315f-3451-45c0-b1fc-687c9d18dccf-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.400492 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96da315f-3451-45c0-b1fc-687c9d18dccf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.758502 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" containerID="a915244e7d7e523425e20fe41c6eae5b7c6e80833a51146876261da7673a8ada" exitCode=0 Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.758593 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9vwnm" event={"ID":"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e","Type":"ContainerDied","Data":"a915244e7d7e523425e20fe41c6eae5b7c6e80833a51146876261da7673a8ada"} Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.762601 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-5cdf6cfbdd-xgxdx" event={"ID":"85cc490e-cee8-405f-b498-41415aae210e","Type":"ContainerStarted","Data":"586c19fb7235085556a78964bd1b33ec20111fe89a93ed097c7c55554bd07f7b"} Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.762762 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.768194 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerStarted","Data":"5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148"} Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.772193 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-qdpnl" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.773077 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-qdpnl" event={"ID":"96da315f-3451-45c0-b1fc-687c9d18dccf","Type":"ContainerDied","Data":"f9d1f92be97bb4d6bf5ed4b01cceb1759d4964a0114f6ecb7862725ed0ad48d2"} Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.773112 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9d1f92be97bb4d6bf5ed4b01cceb1759d4964a0114f6ecb7862725ed0ad48d2" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.801110 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-5cdf6cfbdd-xgxdx" podStartSLOduration=2.80108831 podStartE2EDuration="2.80108831s" podCreationTimestamp="2026-04-04 02:25:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:36.800511605 +0000 UTC m=+1816.466286735" watchObservedRunningTime="2026-04-04 02:25:36.80108831 +0000 UTC m=+1816.466863430" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.840349 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-674794d9f6-5s9ps"] Apr 04 02:25:36 crc kubenswrapper[4681]: E0404 02:25:36.840876 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" containerName="placement-db-sync" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.840903 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" containerName="placement-db-sync" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.841138 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" containerName="placement-db-sync" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.842366 4681 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.849572 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.849808 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.849934 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7b9pr" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.850058 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.850232 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.884593 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674794d9f6-5s9ps"] Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.912972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb799\" (UniqueName: \"kubernetes.io/projected/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-kube-api-access-wb799\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.913046 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-combined-ca-bundle\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 
02:25:36.913090 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-logs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.913108 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-internal-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.913176 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-scripts\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.913250 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-config-data\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:36 crc kubenswrapper[4681]: I0404 02:25:36.913278 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-public-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.015917 
4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb799\" (UniqueName: \"kubernetes.io/projected/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-kube-api-access-wb799\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.016697 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-combined-ca-bundle\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.017017 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-logs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.017041 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-internal-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.017179 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-scripts\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.017279 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-public-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.017299 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-config-data\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.018198 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-logs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.025865 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-scripts\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.026761 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-config-data\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.040137 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-combined-ca-bundle\") pod \"placement-674794d9f6-5s9ps\" (UID: 
\"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.043448 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-internal-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.044240 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-public-tls-certs\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.061355 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb799\" (UniqueName: \"kubernetes.io/projected/b5b3ede0-d5ce-41d0-a320-ee0e732c8f86-kube-api-access-wb799\") pod \"placement-674794d9f6-5s9ps\" (UID: \"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86\") " pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.179589 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.350008 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-27c9s" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.430177 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data\") pod \"04422fbb-2a81-4627-9df3-dce05665ec03\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.430333 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rrz8\" (UniqueName: \"kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8\") pod \"04422fbb-2a81-4627-9df3-dce05665ec03\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.430388 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle\") pod \"04422fbb-2a81-4627-9df3-dce05665ec03\" (UID: \"04422fbb-2a81-4627-9df3-dce05665ec03\") " Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.436299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8" (OuterVolumeSpecName: "kube-api-access-7rrz8") pod "04422fbb-2a81-4627-9df3-dce05665ec03" (UID: "04422fbb-2a81-4627-9df3-dce05665ec03"). InnerVolumeSpecName "kube-api-access-7rrz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.437446 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "04422fbb-2a81-4627-9df3-dce05665ec03" (UID: "04422fbb-2a81-4627-9df3-dce05665ec03"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.469541 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04422fbb-2a81-4627-9df3-dce05665ec03" (UID: "04422fbb-2a81-4627-9df3-dce05665ec03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.532072 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.532108 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rrz8\" (UniqueName: \"kubernetes.io/projected/04422fbb-2a81-4627-9df3-dce05665ec03-kube-api-access-7rrz8\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.532120 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04422fbb-2a81-4627-9df3-dce05665ec03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.713535 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674794d9f6-5s9ps"] Apr 04 02:25:37 crc kubenswrapper[4681]: W0404 02:25:37.724327 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5b3ede0_d5ce_41d0_a320_ee0e732c8f86.slice/crio-57868461b105df32f559813328f26e8c0c1cf2ac0653edcee63c0b111baa850a WatchSource:0}: Error finding container 57868461b105df32f559813328f26e8c0c1cf2ac0653edcee63c0b111baa850a: Status 404 returned error can't find the 
container with id 57868461b105df32f559813328f26e8c0c1cf2ac0653edcee63c0b111baa850a Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.783581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674794d9f6-5s9ps" event={"ID":"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86","Type":"ContainerStarted","Data":"57868461b105df32f559813328f26e8c0c1cf2ac0653edcee63c0b111baa850a"} Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.792130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-27c9s" event={"ID":"04422fbb-2a81-4627-9df3-dce05665ec03","Type":"ContainerDied","Data":"e957a8c7489e6688f04564352379986913ef8758d43a1fa9edccd4940ceb87ea"} Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.792514 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e957a8c7489e6688f04564352379986913ef8758d43a1fa9edccd4940ceb87ea" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.792206 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-27c9s" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.963658 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-796868c666-kk4mh"] Apr 04 02:25:37 crc kubenswrapper[4681]: E0404 02:25:37.964341 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04422fbb-2a81-4627-9df3-dce05665ec03" containerName="barbican-db-sync" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.964453 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="04422fbb-2a81-4627-9df3-dce05665ec03" containerName="barbican-db-sync" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.964774 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="04422fbb-2a81-4627-9df3-dce05665ec03" containerName="barbican-db-sync" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.968293 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.980473 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.981532 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-zmh52" Apr 04 02:25:37 crc kubenswrapper[4681]: I0404 02:25:37.981740 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:37.991398 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796868c666-kk4mh"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.029159 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5699bfbbf-jpbrf"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.030629 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.034631 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.041036 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9699275-8e01-4222-9e46-b90aa70f2a3c-logs\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.041797 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.041900 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgjrj\" (UniqueName: \"kubernetes.io/projected/e9699275-8e01-4222-9e46-b90aa70f2a3c-kube-api-access-qgjrj\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.042128 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data-custom\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 
02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.042329 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-combined-ca-bundle\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.105060 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5699bfbbf-jpbrf"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.131870 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.134665 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.150280 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9699275-8e01-4222-9e46-b90aa70f2a3c-logs\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.150351 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgjrj\" (UniqueName: \"kubernetes.io/projected/e9699275-8e01-4222-9e46-b90aa70f2a3c-kube-api-access-qgjrj\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.150377 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.150468 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-combined-ca-bundle\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.154309 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e9699275-8e01-4222-9e46-b90aa70f2a3c-logs\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.155218 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24831041-c157-474d-9e6d-55931683ed21-logs\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.155418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52wvj\" (UniqueName: \"kubernetes.io/projected/24831041-c157-474d-9e6d-55931683ed21-kube-api-access-52wvj\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.155652 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data-custom\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.155829 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data-custom\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.156018 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-combined-ca-bundle\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.156106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.174000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 
02:25:38.185872 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgjrj\" (UniqueName: \"kubernetes.io/projected/e9699275-8e01-4222-9e46-b90aa70f2a3c-kube-api-access-qgjrj\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.198185 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-combined-ca-bundle\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.205597 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e9699275-8e01-4222-9e46-b90aa70f2a3c-config-data-custom\") pod \"barbican-keystone-listener-796868c666-kk4mh\" (UID: \"e9699275-8e01-4222-9e46-b90aa70f2a3c\") " pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.212729 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.239471 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.273249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-combined-ca-bundle\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: 
I0404 02:25:38.274024 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24831041-c157-474d-9e6d-55931683ed21-logs\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.274174 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.274328 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52wvj\" (UniqueName: \"kubernetes.io/projected/24831041-c157-474d-9e6d-55931683ed21-kube-api-access-52wvj\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.274794 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data-custom\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.274931 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnrvj\" (UniqueName: \"kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc 
kubenswrapper[4681]: I0404 02:25:38.275029 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.275166 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.275324 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.275930 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/24831041-c157-474d-9e6d-55931683ed21-logs\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.278448 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.278518 
4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.287336 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.288973 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.295125 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-combined-ca-bundle\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.295821 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.298726 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.305016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24831041-c157-474d-9e6d-55931683ed21-config-data-custom\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " 
pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.308576 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52wvj\" (UniqueName: \"kubernetes.io/projected/24831041-c157-474d-9e6d-55931683ed21-kube-api-access-52wvj\") pod \"barbican-worker-5699bfbbf-jpbrf\" (UID: \"24831041-c157-474d-9e6d-55931683ed21\") " pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.311190 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.312366 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.356910 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.380842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.380885 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.380930 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.380972 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381008 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlgm\" (UniqueName: \"kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381059 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381139 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rnrvj\" (UniqueName: \"kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381164 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381196 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.381223 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.382391 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.385070 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.386008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.386457 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.396115 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.409581 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnrvj\" (UniqueName: \"kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj\") pod \"dnsmasq-dns-7945546f89-xn4hv\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.432913 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-5699bfbbf-jpbrf" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.447653 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.491986 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.492098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.492130 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.492204 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlgm\" (UniqueName: \"kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.492228 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.492703 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.502908 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.504395 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.510165 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom\") pod \"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.519112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlgm\" (UniqueName: \"kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm\") pod 
\"barbican-api-75f6b5466d-jv68p\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.553114 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-9vwnm" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.696046 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle\") pod \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.696408 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data\") pod \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.697199 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9x787\" (UniqueName: \"kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787\") pod \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.697904 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data\") pod \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\" (UID: \"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e\") " Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.708396 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787" (OuterVolumeSpecName: 
"kube-api-access-9x787") pod "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" (UID: "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e"). InnerVolumeSpecName "kube-api-access-9x787". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.708607 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" (UID: "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.751870 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" (UID: "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.762130 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.804066 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9x787\" (UniqueName: \"kubernetes.io/projected/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-kube-api-access-9x787\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.804099 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.804113 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.808521 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data" (OuterVolumeSpecName: "config-data") pod "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" (UID: "ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.837571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-9vwnm" event={"ID":"ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e","Type":"ContainerDied","Data":"1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052"} Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.837647 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1208eea38ad6d5f860d7f5934eeee97caf5a9de48228317592bddc17bb8cb052" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.837766 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-9vwnm" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.864132 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674794d9f6-5s9ps" event={"ID":"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86","Type":"ContainerStarted","Data":"e0327455913e920ebd0c64a41b3256d90e94f22d0b4ccc56e199fa8c89a59a64"} Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.864162 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674794d9f6-5s9ps" event={"ID":"b5b3ede0-d5ce-41d0-a320-ee0e732c8f86","Type":"ContainerStarted","Data":"a7f42fc57b7029f96e35d63075e9029c5ca236025e3dd995217915d70a84d57e"} Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.864178 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.864187 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.864922 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.887245 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-674794d9f6-5s9ps" podStartSLOduration=2.8872246759999998 podStartE2EDuration="2.887224676s" podCreationTimestamp="2026-04-04 02:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:38.883246057 +0000 UTC m=+1818.549021177" watchObservedRunningTime="2026-04-04 02:25:38.887224676 +0000 UTC m=+1818.552999796" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.905984 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:38 crc kubenswrapper[4681]: I0404 02:25:38.940191 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.057899 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.058048 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.245350 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-796868c666-kk4mh"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.319716 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.333050 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5699bfbbf-jpbrf"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.346636 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.417128 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:25:39 crc kubenswrapper[4681]: E0404 02:25:39.417807 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" containerName="glance-db-sync" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.417892 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" containerName="glance-db-sync" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.418242 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" 
containerName="glance-db-sync" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.419465 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.432116 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.476467 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.529296 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.529383 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.529942 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.531690 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.533146 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.533223 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8z4p\" (UniqueName: \"kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635326 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8z4p\" (UniqueName: \"kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635477 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635501 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635583 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.635614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.636685 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.639174 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.639482 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.640416 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.640875 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.681611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8z4p\" (UniqueName: \"kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p\") pod \"dnsmasq-dns-6c5cf958f7-nnfhw\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: W0404 02:25:39.809302 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde70d856_1c6d_498e_b548_1724bbc8cb66.slice/crio-7bd0558ac2b9ecdf985c08c5c22f266e5f9bee72ee3e4ad74e6cdbf964435f49 WatchSource:0}: Error finding container 7bd0558ac2b9ecdf985c08c5c22f266e5f9bee72ee3e4ad74e6cdbf964435f49: Status 404 returned error can't find the container with id 7bd0558ac2b9ecdf985c08c5c22f266e5f9bee72ee3e4ad74e6cdbf964435f49 Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.835841 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.839759 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.893522 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" event={"ID":"e9699275-8e01-4222-9e46-b90aa70f2a3c","Type":"ContainerStarted","Data":"71f9dbbdd6161785d5afec3ecb773ff503bc82f6539887a8fd67bd72de51ab2a"} Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.917736 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerStarted","Data":"7bd0558ac2b9ecdf985c08c5c22f266e5f9bee72ee3e4ad74e6cdbf964435f49"} Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.937331 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5699bfbbf-jpbrf" event={"ID":"24831041-c157-474d-9e6d-55931683ed21","Type":"ContainerStarted","Data":"f0fb2faace39e6abd629edf48e99df85208e6b2164057a59ebe251a6d5b5ec82"} Apr 04 02:25:39 crc kubenswrapper[4681]: I0404 02:25:39.945211 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" 
event={"ID":"8c2dba8a-c0ce-4c67-bba7-293b09a65566","Type":"ContainerStarted","Data":"be0202512c916560222d3934979e7d297beb28c7ee72501536bc97adf32e094c"} Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.205801 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.207914 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.209840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.211417 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-9bgjc" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.213407 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.245352 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.356549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.356598 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc 
kubenswrapper[4681]: I0404 02:25:40.356645 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.356670 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.356800 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2gcv\" (UniqueName: \"kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.357041 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.357389 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " 
pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459435 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459517 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459573 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459610 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459663 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2gcv\" (UniqueName: \"kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc 
kubenswrapper[4681]: I0404 02:25:40.459800 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.459936 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.460272 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.460433 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.467195 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.467469 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.470455 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.481891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2gcv\" (UniqueName: \"kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.489655 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.548375 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.553717 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.555517 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.558540 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.573680 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.584295 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.665649 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.665964 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.665993 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.666017 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652t8\" (UniqueName: \"kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.666038 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.666058 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.666418 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: W0404 02:25:40.706195 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded63431c_61d3_47d4_84a4_9eca959a780f.slice/crio-81aa56898b97447e3f1dd780c4e79980c077a51ea5ba0dba0f654f92e08af882 WatchSource:0}: Error finding container 81aa56898b97447e3f1dd780c4e79980c077a51ea5ba0dba0f654f92e08af882: Status 404 returned error can't find the container with id 81aa56898b97447e3f1dd780c4e79980c077a51ea5ba0dba0f654f92e08af882 Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768187 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768311 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768340 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-652t8\" (UniqueName: \"kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 
02:25:40.768396 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768423 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.768608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.769161 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.769905 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.770030 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.774543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.775256 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.780863 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.798179 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-652t8\" (UniqueName: \"kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8\") pod \"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.855415 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.951031 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.981310 4681 generic.go:334] "Generic (PLEG): container finished" podID="8c2dba8a-c0ce-4c67-bba7-293b09a65566" containerID="14d81ac2ba0fc1f50cb03269d57158aecca7e09f7e3b0931d965e3edc48996f0" exitCode=0 Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.981429 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" event={"ID":"8c2dba8a-c0ce-4c67-bba7-293b09a65566","Type":"ContainerDied","Data":"14d81ac2ba0fc1f50cb03269d57158aecca7e09f7e3b0931d965e3edc48996f0"} Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.986940 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerStarted","Data":"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5"} Apr 04 02:25:40 crc kubenswrapper[4681]: I0404 02:25:40.997858 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerStarted","Data":"81aa56898b97447e3f1dd780c4e79980c077a51ea5ba0dba0f654f92e08af882"} Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.321516 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.727353 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.750151 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828472 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828553 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828612 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828691 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828806 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnrvj\" (UniqueName: \"kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.828904 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc\") pod \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\" (UID: \"8c2dba8a-c0ce-4c67-bba7-293b09a65566\") " Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.839619 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj" (OuterVolumeSpecName: "kube-api-access-rnrvj") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "kube-api-access-rnrvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.863400 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.876907 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.880840 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.894231 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config" (OuterVolumeSpecName: "config") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.899366 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c2dba8a-c0ce-4c67-bba7-293b09a65566" (UID: "8c2dba8a-c0ce-4c67-bba7-293b09a65566"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.931363 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnrvj\" (UniqueName: \"kubernetes.io/projected/8c2dba8a-c0ce-4c67-bba7-293b09a65566-kube-api-access-rnrvj\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.931394 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.931405 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.931414 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:41 crc 
kubenswrapper[4681]: I0404 02:25:41.931423 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:41 crc kubenswrapper[4681]: I0404 02:25:41.931431 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c2dba8a-c0ce-4c67-bba7-293b09a65566-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.038879 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerStarted","Data":"51d4fba56fb25fba59160f7862219cbd18a9e71411b36e8ae0bc76dd54d27280"} Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.060611 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerStarted","Data":"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1"} Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.067353 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" event={"ID":"8c2dba8a-c0ce-4c67-bba7-293b09a65566","Type":"ContainerDied","Data":"be0202512c916560222d3934979e7d297beb28c7ee72501536bc97adf32e094c"} Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.067415 4681 scope.go:117] "RemoveContainer" containerID="14d81ac2ba0fc1f50cb03269d57158aecca7e09f7e3b0931d965e3edc48996f0" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.067539 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7945546f89-xn4hv" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.075249 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerStarted","Data":"faebd27ecea430ae21d884d593a4b64406f9d9638df415341fb9062cd5dbba9c"} Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.240487 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.244548 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7945546f89-xn4hv"] Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.376445 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cb6cbf97d-96269"] Apr 04 02:25:42 crc kubenswrapper[4681]: E0404 02:25:42.376864 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2dba8a-c0ce-4c67-bba7-293b09a65566" containerName="init" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.376875 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2dba8a-c0ce-4c67-bba7-293b09a65566" containerName="init" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.377048 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2dba8a-c0ce-4c67-bba7-293b09a65566" containerName="init" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.377999 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.384191 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.384333 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.394650 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb6cbf97d-96269"] Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448264 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data-custom\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448472 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljxs7\" (UniqueName: \"kubernetes.io/projected/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-kube-api-access-ljxs7\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448538 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-internal-tls-certs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448554 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-public-tls-certs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448594 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448609 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-logs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.448651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-combined-ca-bundle\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551315 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data-custom\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ljxs7\" (UniqueName: \"kubernetes.io/projected/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-kube-api-access-ljxs7\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-internal-tls-certs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-public-tls-certs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551499 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551521 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-logs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.551552 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-combined-ca-bundle\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.552391 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-logs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.563008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-public-tls-certs\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.563637 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.563901 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-config-data-custom\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.567744 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-internal-tls-certs\") pod 
\"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.574839 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-combined-ca-bundle\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.575955 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljxs7\" (UniqueName: \"kubernetes.io/projected/bce5c08c-6cdc-47ae-9454-ffc500f6e34c-kube-api-access-ljxs7\") pod \"barbican-api-5cb6cbf97d-96269\" (UID: \"bce5c08c-6cdc-47ae-9454-ffc500f6e34c\") " pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:42 crc kubenswrapper[4681]: I0404 02:25:42.706655 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:43 crc kubenswrapper[4681]: I0404 02:25:43.084513 4681 generic.go:334] "Generic (PLEG): container finished" podID="442b54de-22a7-4121-aab3-5365d4e0872d" containerID="5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148" exitCode=1 Apr 04 02:25:43 crc kubenswrapper[4681]: I0404 02:25:43.084569 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerDied","Data":"5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148"} Apr 04 02:25:43 crc kubenswrapper[4681]: I0404 02:25:43.085129 4681 scope.go:117] "RemoveContainer" containerID="5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148" Apr 04 02:25:43 crc kubenswrapper[4681]: E0404 02:25:43.085461 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(442b54de-22a7-4121-aab3-5365d4e0872d)\"" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" Apr 04 02:25:43 crc kubenswrapper[4681]: I0404 02:25:43.087085 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerStarted","Data":"314cd7b265486d66ae3487235cc262a2d8f84ebce430a7ad3538684e03244d90"} Apr 04 02:25:43 crc kubenswrapper[4681]: I0404 02:25:43.213539 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2dba8a-c0ce-4c67-bba7-293b09a65566" path="/var/lib/kubelet/pods/8c2dba8a-c0ce-4c67-bba7-293b09a65566/volumes" Apr 04 02:25:44 crc kubenswrapper[4681]: I0404 02:25:44.058183 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Apr 04 
02:25:44 crc kubenswrapper[4681]: I0404 02:25:44.068477 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Apr 04 02:25:44 crc kubenswrapper[4681]: I0404 02:25:44.106535 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:25:44 crc kubenswrapper[4681]: I0404 02:25:44.258887 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:44 crc kubenswrapper[4681]: I0404 02:25:44.308020 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.107100 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerStarted","Data":"0248d823aa4435843e910484d889b31c61063e59c11089ea84fa231723ad2331"} Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.109076 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerStarted","Data":"9947c061de491d1176770280a3cafeab6827ef53fcf165fc754e7423901c82da"} Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.110638 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerID="314cd7b265486d66ae3487235cc262a2d8f84ebce430a7ad3538684e03244d90" exitCode=0 Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.110735 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerDied","Data":"314cd7b265486d66ae3487235cc262a2d8f84ebce430a7ad3538684e03244d90"} Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.111188 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.111216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.159458 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-75f6b5466d-jv68p" podStartSLOduration=7.159439033 podStartE2EDuration="7.159439033s" podCreationTimestamp="2026-04-04 02:25:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:45.142041956 +0000 UTC m=+1824.807817076" watchObservedRunningTime="2026-04-04 02:25:45.159439033 +0000 UTC m=+1824.825214153" Apr 04 02:25:45 crc kubenswrapper[4681]: I0404 02:25:45.932423 4681 scope.go:117] "RemoveContainer" containerID="596d58724c4139b2b8f51ced98fde4359adf5f591dbff7adc473e65110e87d1d" Apr 04 02:25:46 crc kubenswrapper[4681]: I0404 02:25:46.875200 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:48 crc kubenswrapper[4681]: I0404 02:25:48.125690 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:25:48 crc kubenswrapper[4681]: I0404 02:25:48.202583 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:25:48 crc kubenswrapper[4681]: E0404 02:25:48.202888 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:25:48 crc kubenswrapper[4681]: I0404 02:25:48.239461 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:48 crc kubenswrapper[4681]: I0404 02:25:48.240197 4681 scope.go:117] "RemoveContainer" containerID="5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148" Apr 04 02:25:48 crc kubenswrapper[4681]: E0404 02:25:48.240463 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(442b54de-22a7-4121-aab3-5365d4e0872d)\"" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" Apr 04 02:25:51 crc kubenswrapper[4681]: E0404 02:25:51.274504 4681 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Apr 04 02:25:51 crc kubenswrapper[4681]: E0404 02:25:51.277465 4681 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p4m27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d56f3f04-51e5-4b48-a4bb-88fe0b9be711): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Apr 04 02:25:51 crc kubenswrapper[4681]: E0404 02:25:51.279016 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" Apr 04 02:25:51 crc kubenswrapper[4681]: I0404 02:25:51.715639 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cb6cbf97d-96269"] Apr 04 02:25:51 crc kubenswrapper[4681]: W0404 02:25:51.718935 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbce5c08c_6cdc_47ae_9454_ffc500f6e34c.slice/crio-04faaef59ae3ccd078d33aae631caa0e7e2e95a1d922bf4440c97e531285e1a2 WatchSource:0}: Error finding container 04faaef59ae3ccd078d33aae631caa0e7e2e95a1d922bf4440c97e531285e1a2: Status 404 returned 
error can't find the container with id 04faaef59ae3ccd078d33aae631caa0e7e2e95a1d922bf4440c97e531285e1a2 Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.220919 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5699bfbbf-jpbrf" event={"ID":"24831041-c157-474d-9e6d-55931683ed21","Type":"ContainerStarted","Data":"22722166d9401e4951daff62e5d63abf3d4d10c0001d0f33aa10e0dce1063cd1"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.220980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5699bfbbf-jpbrf" event={"ID":"24831041-c157-474d-9e6d-55931683ed21","Type":"ContainerStarted","Data":"6e3ca04f1c846716b5f1a5baabd7a1fcf9c3d91c17cbbd04119a413cff5734bb"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.223016 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerStarted","Data":"df2b8959eaf0ca658e3ed76e530e1f23b682d4fc7daef50e9bfd119346415feb"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.226244 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerStarted","Data":"6c4073f2a2f3cb8c96e83aaa13dfcb56976589c7f23d418e6f0c29b290422fd1"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.228281 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb6cbf97d-96269" event={"ID":"bce5c08c-6cdc-47ae-9454-ffc500f6e34c","Type":"ContainerStarted","Data":"b6a9d60faab89c3bff4d18dc68231cbbf46003ea9e1b174a3684b7bcd6f9b285"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.228318 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb6cbf97d-96269" 
event={"ID":"bce5c08c-6cdc-47ae-9454-ffc500f6e34c","Type":"ContainerStarted","Data":"04faaef59ae3ccd078d33aae631caa0e7e2e95a1d922bf4440c97e531285e1a2"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.229890 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" event={"ID":"e9699275-8e01-4222-9e46-b90aa70f2a3c","Type":"ContainerStarted","Data":"499840a4550dd20fffa46688b03221f72456c82491770f9030118c23ecba05eb"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.229943 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" event={"ID":"e9699275-8e01-4222-9e46-b90aa70f2a3c","Type":"ContainerStarted","Data":"3ae06343ec72a930a778aec1c4142949f1d86338e8080df6baa115f1334208c9"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.233139 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerStarted","Data":"9fd7f3ca436ef02929af93630a15e8f95c1b3765dfd66fb496d24bf0f659cc99"} Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.233231 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-central-agent" containerID="cri-o://ceee4573b358f1ca9a688f529ac7b6118eb053a9ec5924129a1dd9f1be1c8b60" gracePeriod=30 Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.233346 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-notification-agent" containerID="cri-o://8b63eab707cd330468951cdf271ac0a95bcd43320b17bc8881c7a76e6d644d30" gracePeriod=30 Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.233379 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="sg-core" containerID="cri-o://eefc3a63314e5a2815171b5855800cfd3e778f94f71f55e799d20a984520e16d" gracePeriod=30 Apr 04 02:25:52 crc kubenswrapper[4681]: I0404 02:25:52.260890 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" podStartSLOduration=13.260864011 podStartE2EDuration="13.260864011s" podCreationTimestamp="2026-04-04 02:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:52.250852225 +0000 UTC m=+1831.916627345" watchObservedRunningTime="2026-04-04 02:25:52.260864011 +0000 UTC m=+1831.926639131" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.254061 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cb6cbf97d-96269" event={"ID":"bce5c08c-6cdc-47ae-9454-ffc500f6e34c","Type":"ContainerStarted","Data":"1492053fc91a578b6167861fb7133db8ebea577dff406ec98d808b119d40578f"} Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.254666 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257005 4681 generic.go:334] "Generic (PLEG): container finished" podID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerID="eefc3a63314e5a2815171b5855800cfd3e778f94f71f55e799d20a984520e16d" exitCode=2 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257043 4681 generic.go:334] "Generic (PLEG): container finished" podID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerID="ceee4573b358f1ca9a688f529ac7b6118eb053a9ec5924129a1dd9f1be1c8b60" exitCode=0 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257174 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" 
podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-log" containerID="cri-o://9947c061de491d1176770280a3cafeab6827ef53fcf165fc754e7423901c82da" gracePeriod=30 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257257 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerDied","Data":"eefc3a63314e5a2815171b5855800cfd3e778f94f71f55e799d20a984520e16d"} Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257306 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerDied","Data":"ceee4573b358f1ca9a688f529ac7b6118eb053a9ec5924129a1dd9f1be1c8b60"} Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.257356 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-httpd" containerID="cri-o://6c4073f2a2f3cb8c96e83aaa13dfcb56976589c7f23d418e6f0c29b290422fd1" gracePeriod=30 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.258170 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.258568 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-log" containerID="cri-o://0248d823aa4435843e910484d889b31c61063e59c11089ea84fa231723ad2331" gracePeriod=30 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.258636 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-httpd" 
containerID="cri-o://df2b8959eaf0ca658e3ed76e530e1f23b682d4fc7daef50e9bfd119346415feb" gracePeriod=30 Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.288992 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cb6cbf97d-96269" podStartSLOduration=11.288970613 podStartE2EDuration="11.288970613s" podCreationTimestamp="2026-04-04 02:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:53.287567144 +0000 UTC m=+1832.953342264" watchObservedRunningTime="2026-04-04 02:25:53.288970613 +0000 UTC m=+1832.954745733" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.316062 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=14.316042226 podStartE2EDuration="14.316042226s" podCreationTimestamp="2026-04-04 02:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:53.310143523 +0000 UTC m=+1832.975918653" watchObservedRunningTime="2026-04-04 02:25:53.316042226 +0000 UTC m=+1832.981817346" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.332718 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=14.332344723 podStartE2EDuration="14.332344723s" podCreationTimestamp="2026-04-04 02:25:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:25:53.329753702 +0000 UTC m=+1832.995528822" watchObservedRunningTime="2026-04-04 02:25:53.332344723 +0000 UTC m=+1832.998119843" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.355913 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-worker-5699bfbbf-jpbrf" podStartSLOduration=4.511053822 podStartE2EDuration="16.35589337s" podCreationTimestamp="2026-04-04 02:25:37 +0000 UTC" firstStartedPulling="2026-04-04 02:25:39.404393153 +0000 UTC m=+1819.070168273" lastFinishedPulling="2026-04-04 02:25:51.249232701 +0000 UTC m=+1830.915007821" observedRunningTime="2026-04-04 02:25:53.346481701 +0000 UTC m=+1833.012256821" watchObservedRunningTime="2026-04-04 02:25:53.35589337 +0000 UTC m=+1833.021668490" Apr 04 02:25:53 crc kubenswrapper[4681]: I0404 02:25:53.375196 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-796868c666-kk4mh" podStartSLOduration=4.409812851 podStartE2EDuration="16.375175399s" podCreationTimestamp="2026-04-04 02:25:37 +0000 UTC" firstStartedPulling="2026-04-04 02:25:39.247168756 +0000 UTC m=+1818.912943876" lastFinishedPulling="2026-04-04 02:25:51.212531304 +0000 UTC m=+1830.878306424" observedRunningTime="2026-04-04 02:25:53.371670942 +0000 UTC m=+1833.037446082" watchObservedRunningTime="2026-04-04 02:25:53.375175399 +0000 UTC m=+1833.040950519" Apr 04 02:25:54 crc kubenswrapper[4681]: I0404 02:25:54.265658 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.278120 4681 generic.go:334] "Generic (PLEG): container finished" podID="37de3e0c-8775-4ee4-8462-65507df44b67" containerID="df2b8959eaf0ca658e3ed76e530e1f23b682d4fc7daef50e9bfd119346415feb" exitCode=0 Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.278461 4681 generic.go:334] "Generic (PLEG): container finished" podID="37de3e0c-8775-4ee4-8462-65507df44b67" containerID="0248d823aa4435843e910484d889b31c61063e59c11089ea84fa231723ad2331" exitCode=143 Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.278213 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerDied","Data":"df2b8959eaf0ca658e3ed76e530e1f23b682d4fc7daef50e9bfd119346415feb"} Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.278520 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerDied","Data":"0248d823aa4435843e910484d889b31c61063e59c11089ea84fa231723ad2331"} Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281088 4681 generic.go:334] "Generic (PLEG): container finished" podID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerID="6c4073f2a2f3cb8c96e83aaa13dfcb56976589c7f23d418e6f0c29b290422fd1" exitCode=0 Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281120 4681 generic.go:334] "Generic (PLEG): container finished" podID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerID="9947c061de491d1176770280a3cafeab6827ef53fcf165fc754e7423901c82da" exitCode=143 Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281138 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerDied","Data":"6c4073f2a2f3cb8c96e83aaa13dfcb56976589c7f23d418e6f0c29b290422fd1"} Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281179 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerDied","Data":"9947c061de491d1176770280a3cafeab6827ef53fcf165fc754e7423901c82da"} Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5","Type":"ContainerDied","Data":"51d4fba56fb25fba59160f7862219cbd18a9e71411b36e8ae0bc76dd54d27280"} Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.281208 4681 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51d4fba56fb25fba59160f7862219cbd18a9e71411b36e8ae0bc76dd54d27280" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.297239 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396310 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396374 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396406 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396466 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-652t8\" (UniqueName: \"kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396513 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: 
\"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396610 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396662 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run\") pod \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\" (UID: \"836f050e-5d07-40bb-a5c9-b1ec6b87f9a5\") " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.396821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs" (OuterVolumeSpecName: "logs") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.397437 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.397775 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.401897 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts" (OuterVolumeSpecName: "scripts") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.405240 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8" (OuterVolumeSpecName: "kube-api-access-652t8") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "kube-api-access-652t8". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.405636 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.429584 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.450432 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data" (OuterVolumeSpecName: "config-data") pod "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" (UID: "836f050e-5d07-40bb-a5c9-b1ec6b87f9a5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500421 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500460 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-httpd-run\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500472 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500510 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500522 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-652t8\" (UniqueName: \"kubernetes.io/projected/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-kube-api-access-652t8\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.500536 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.526284 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Apr 04 02:25:55 crc kubenswrapper[4681]: I0404 02:25:55.602434 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.250989 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.301854 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.302453 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.303237 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"37de3e0c-8775-4ee4-8462-65507df44b67","Type":"ContainerDied","Data":"faebd27ecea430ae21d884d593a4b64406f9d9638df415341fb9062cd5dbba9c"} Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.303293 4681 scope.go:117] "RemoveContainer" containerID="df2b8959eaf0ca658e3ed76e530e1f23b682d4fc7daef50e9bfd119346415feb" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.328763 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.329980 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.330102 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.330161 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2gcv\" (UniqueName: \"kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.330242 
4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.330305 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.330340 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts\") pod \"37de3e0c-8775-4ee4-8462-65507df44b67\" (UID: \"37de3e0c-8775-4ee4-8462-65507df44b67\") " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.331250 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs" (OuterVolumeSpecName: "logs") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.331673 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.332137 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.334347 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.334668 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv" (OuterVolumeSpecName: "kube-api-access-m2gcv") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "kube-api-access-m2gcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.337446 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.337763 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts" (OuterVolumeSpecName: "scripts") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.339249 4681 scope.go:117] "RemoveContainer" containerID="0248d823aa4435843e910484d889b31c61063e59c11089ea84fa231723ad2331" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.357134 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.369518 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: E0404 02:25:56.370095 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370123 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: E0404 02:25:56.370138 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370144 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: E0404 02:25:56.370174 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370180 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: E0404 02:25:56.370198 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370204 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370401 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370416 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370431 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-log" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.370447 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" containerName="glance-httpd" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.377909 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.380137 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.382397 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.390074 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.399167 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.415723 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data" (OuterVolumeSpecName: "config-data") pod "37de3e0c-8775-4ee4-8462-65507df44b67" (UID: "37de3e0c-8775-4ee4-8462-65507df44b67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433456 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqjjm\" (UniqueName: \"kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433597 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " 
pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433660 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.433838 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434034 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434588 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434706 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434750 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434765 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/37de3e0c-8775-4ee4-8462-65507df44b67-httpd-run\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434777 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2gcv\" (UniqueName: \"kubernetes.io/projected/37de3e0c-8775-4ee4-8462-65507df44b67-kube-api-access-m2gcv\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434788 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.434798 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/37de3e0c-8775-4ee4-8462-65507df44b67-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.458371 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536023 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536081 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqjjm\" (UniqueName: \"kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536108 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536157 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536199 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536254 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536373 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.536391 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.539817 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.540159 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.540358 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.540492 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.542745 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.545523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.547595 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.552632 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc 
kubenswrapper[4681]: I0404 02:25:56.556149 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqjjm\" (UniqueName: \"kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.579833 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.661576 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.671213 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.688751 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.691121 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.694310 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.694430 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.705139 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746199 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb6wm\" (UniqueName: \"kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746397 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746440 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746474 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746515 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746542 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.746562 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.798948 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848209 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb6wm\" (UniqueName: \"kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848344 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848389 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848422 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " 
pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848468 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848529 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.848764 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.849164 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc 
kubenswrapper[4681]: I0404 02:25:56.849280 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.859365 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.859583 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.862019 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.876535 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.877161 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mb6wm\" (UniqueName: \"kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:56 crc kubenswrapper[4681]: I0404 02:25:56.916879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " pod="openstack/glance-default-external-api-0" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.011447 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.221352 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37de3e0c-8775-4ee4-8462-65507df44b67" path="/var/lib/kubelet/pods/37de3e0c-8775-4ee4-8462-65507df44b67/volumes" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.222759 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="836f050e-5d07-40bb-a5c9-b1ec6b87f9a5" path="/var/lib/kubelet/pods/836f050e-5d07-40bb-a5c9-b1ec6b87f9a5/volumes" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.315225 4681 generic.go:334] "Generic (PLEG): container finished" podID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerID="8b63eab707cd330468951cdf271ac0a95bcd43320b17bc8881c7a76e6d644d30" exitCode=0 Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.315311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerDied","Data":"8b63eab707cd330468951cdf271ac0a95bcd43320b17bc8881c7a76e6d644d30"} Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.361645 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Apr 04 02:25:57 crc kubenswrapper[4681]: W0404 02:25:57.362764 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b42bc47_8477_440f_b606_5c8c5cc6dee3.slice/crio-e0215b4bd80fea3875abc995e2bda8f208083b22a3807b390a26a20158866062 WatchSource:0}: Error finding container e0215b4bd80fea3875abc995e2bda8f208083b22a3807b390a26a20158866062: Status 404 returned error can't find the container with id e0215b4bd80fea3875abc995e2bda8f208083b22a3807b390a26a20158866062 Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.558365 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:25:57 crc kubenswrapper[4681]: W0404 02:25:57.570959 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60193934_a521_4dda_8d57_f41affeaab02.slice/crio-5c1ce7b7e6429aab3163ce4fd4403dc6993144e555d0349de648c0896441fb48 WatchSource:0}: Error finding container 5c1ce7b7e6429aab3163ce4fd4403dc6993144e555d0349de648c0896441fb48: Status 404 returned error can't find the container with id 5c1ce7b7e6429aab3163ce4fd4403dc6993144e555d0349de648c0896441fb48 Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.703865 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.764788 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.764931 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4m27\" (UniqueName: \"kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.764956 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.764975 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.764988 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.765087 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.765131 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd\") pod \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\" (UID: \"d56f3f04-51e5-4b48-a4bb-88fe0b9be711\") " Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.765580 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.765788 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.771461 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts" (OuterVolumeSpecName: "scripts") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.771821 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27" (OuterVolumeSpecName: "kube-api-access-p4m27") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "kube-api-access-p4m27". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.800217 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.833663 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.838530 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data" (OuterVolumeSpecName: "config-data") pod "d56f3f04-51e5-4b48-a4bb-88fe0b9be711" (UID: "d56f3f04-51e5-4b48-a4bb-88fe0b9be711"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867756 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4m27\" (UniqueName: \"kubernetes.io/projected/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-kube-api-access-p4m27\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867798 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867810 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867818 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867826 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867835 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:57 crc kubenswrapper[4681]: I0404 02:25:57.867843 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d56f3f04-51e5-4b48-a4bb-88fe0b9be711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.240090 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.240493 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.241371 4681 scope.go:117] "RemoveContainer" containerID="5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.340426 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d56f3f04-51e5-4b48-a4bb-88fe0b9be711","Type":"ContainerDied","Data":"6a997ab174b32de037fec0a617bcf1cae3de0426517769e3e8da2f30fdd842a1"} Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.340483 4681 scope.go:117] "RemoveContainer" containerID="eefc3a63314e5a2815171b5855800cfd3e778f94f71f55e799d20a984520e16d" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.340627 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.353661 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerStarted","Data":"08852104fb308cc149f2bf55c77ee20cee14d2e0fff74ac52a5b28ed27db9ff2"} Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.353715 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerStarted","Data":"5c1ce7b7e6429aab3163ce4fd4403dc6993144e555d0349de648c0896441fb48"} Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.365960 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerStarted","Data":"e68dda0525c1dd814b88ac8ab7014ba8cfbdafb2115894da3d97ef447742b5f8"} Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.366018 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerStarted","Data":"e0215b4bd80fea3875abc995e2bda8f208083b22a3807b390a26a20158866062"} Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.420001 4681 scope.go:117] "RemoveContainer" containerID="8b63eab707cd330468951cdf271ac0a95bcd43320b17bc8881c7a76e6d644d30" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.433524 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.468504 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.481406 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:25:58 crc kubenswrapper[4681]: E0404 
02:25:58.481992 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-notification-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482015 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-notification-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: E0404 02:25:58.482030 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-central-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482038 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-central-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: E0404 02:25:58.482051 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="sg-core" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482079 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="sg-core" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482407 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-notification-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482425 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="sg-core" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.482442 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" containerName="ceilometer-central-agent" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.487766 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.488838 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.488895 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.488929 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.488998 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.489040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.489108 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfpqj\" (UniqueName: \"kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.489139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.493793 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.493847 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.497616 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.562391 4681 scope.go:117] "RemoveContainer" containerID="ceee4573b358f1ca9a688f529ac7b6118eb053a9ec5924129a1dd9f1be1c8b60" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591370 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591387 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591437 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591468 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591511 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfpqj\" (UniqueName: \"kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.591531 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.592013 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.594715 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.600409 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.601198 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.602085 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.626945 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfpqj\" (UniqueName: \"kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.627180 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts\") pod \"ceilometer-0\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " pod="openstack/ceilometer-0" Apr 04 02:25:58 crc kubenswrapper[4681]: I0404 02:25:58.848457 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.255695 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56f3f04-51e5-4b48-a4bb-88fe0b9be711" path="/var/lib/kubelet/pods/d56f3f04-51e5-4b48-a4bb-88fe0b9be711/volumes" Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.384367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerStarted","Data":"7fc6d50786cbfbfb9e2f347c07b3050a1e48e7c61b300d9e7bb625443c95232a"} Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.392539 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerStarted","Data":"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30"} Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.402395 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:25:59 crc kubenswrapper[4681]: W0404 02:25:59.402925 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c366f78_0d36_4ad8_b037_f3156da30c73.slice/crio-f0999ea2808112013a444cb66b3ebf1b82105fc800e2c6b44486ce536917efee WatchSource:0}: Error finding container f0999ea2808112013a444cb66b3ebf1b82105fc800e2c6b44486ce536917efee: Status 404 returned error can't find the container with id f0999ea2808112013a444cb66b3ebf1b82105fc800e2c6b44486ce536917efee 
Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.405300 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.839564 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.842171 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.879526 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cb6cbf97d-96269" Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.952097 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:25:59 crc kubenswrapper[4681]: I0404 02:25:59.952408 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="dnsmasq-dns" containerID="cri-o://9ca95f147963f3f05f7b633b1aa66e9c1a6744c05024dcd7852e1e0d4624443a" gracePeriod=10 Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.000120 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.001305 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f6b5466d-jv68p" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api-log" containerID="cri-o://9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5" gracePeriod=30 Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.002127 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-75f6b5466d-jv68p" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" 
containerName="barbican-api" containerID="cri-o://307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1" gracePeriod=30 Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.171923 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587826-vvnqf"] Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.174207 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.181089 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.181483 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.181644 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.195757 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587826-vvnqf"] Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.263378 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpbbb\" (UniqueName: \"kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb\") pod \"auto-csr-approver-29587826-vvnqf\" (UID: \"9130bd5c-6c50-412e-887f-4b22f4bc5377\") " pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.372938 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpbbb\" (UniqueName: \"kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb\") pod \"auto-csr-approver-29587826-vvnqf\" (UID: \"9130bd5c-6c50-412e-887f-4b22f4bc5377\") " 
pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.417449 4681 generic.go:334] "Generic (PLEG): container finished" podID="729d6f58-b3b5-4f01-a602-714dffa40001" containerID="9ca95f147963f3f05f7b633b1aa66e9c1a6744c05024dcd7852e1e0d4624443a" exitCode=0 Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.417542 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" event={"ID":"729d6f58-b3b5-4f01-a602-714dffa40001","Type":"ContainerDied","Data":"9ca95f147963f3f05f7b633b1aa66e9c1a6744c05024dcd7852e1e0d4624443a"} Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.422943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpbbb\" (UniqueName: \"kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb\") pod \"auto-csr-approver-29587826-vvnqf\" (UID: \"9130bd5c-6c50-412e-887f-4b22f4bc5377\") " pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.433480 4681 generic.go:334] "Generic (PLEG): container finished" podID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerID="9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5" exitCode=143 Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.433567 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerDied","Data":"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5"} Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.452504 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerStarted","Data":"51cbafa8d2eb1dd9e74f8eb062a9c11c17e082f2fe4bd1ec5b261e6f904b51ef"} Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.465315 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerStarted","Data":"f0999ea2808112013a444cb66b3ebf1b82105fc800e2c6b44486ce536917efee"} Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.493088 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.493055728 podStartE2EDuration="4.493055728s" podCreationTimestamp="2026-04-04 02:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:00.485393327 +0000 UTC m=+1840.151168457" watchObservedRunningTime="2026-04-04 02:26:00.493055728 +0000 UTC m=+1840.158830848" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.552045 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.552023277 podStartE2EDuration="4.552023277s" podCreationTimestamp="2026-04-04 02:25:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:00.538731562 +0000 UTC m=+1840.204506682" watchObservedRunningTime="2026-04-04 02:26:00.552023277 +0000 UTC m=+1840.217798397" Apr 04 02:26:00 crc kubenswrapper[4681]: I0404 02:26:00.602284 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.169372 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.237282 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:26:01 crc kubenswrapper[4681]: E0404 02:26:01.237970 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301353 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301474 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301671 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvrh\" (UniqueName: \"kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301732 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301840 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.301993 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb\") pod \"729d6f58-b3b5-4f01-a602-714dffa40001\" (UID: \"729d6f58-b3b5-4f01-a602-714dffa40001\") " Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.353077 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh" (OuterVolumeSpecName: "kube-api-access-zkvrh") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "kube-api-access-zkvrh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.381547 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587826-vvnqf"] Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.404356 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvrh\" (UniqueName: \"kubernetes.io/projected/729d6f58-b3b5-4f01-a602-714dffa40001-kube-api-access-zkvrh\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.423022 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config" (OuterVolumeSpecName: "config") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.451297 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.452123 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.452641 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.467298 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "729d6f58-b3b5-4f01-a602-714dffa40001" (UID: "729d6f58-b3b5-4f01-a602-714dffa40001"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.485567 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" event={"ID":"729d6f58-b3b5-4f01-a602-714dffa40001","Type":"ContainerDied","Data":"6fed4d3fa0b544f7342ae385d38ef944d0a24c4d86f75d0ebb6bb891118d5082"} Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.485631 4681 scope.go:117] "RemoveContainer" containerID="9ca95f147963f3f05f7b633b1aa66e9c1a6744c05024dcd7852e1e0d4624443a" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.485797 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-c9c6dbd5-6ghst" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.490340 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" event={"ID":"9130bd5c-6c50-412e-887f-4b22f4bc5377","Type":"ContainerStarted","Data":"19810feb1e4720fb7d6f88a69dc047d2a405a8ac60cd2fa2c77794dd409cbce2"} Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.503551 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerStarted","Data":"f340523cc0fc74f993800c9aad21330a23f1226cbbdd538e546236fce48597ab"} Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.508698 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.508751 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.508766 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.508777 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.508791 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729d6f58-b3b5-4f01-a602-714dffa40001-dns-swift-storage-0\") on node \"crc\" 
DevicePath \"\"" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.527422 4681 scope.go:117] "RemoveContainer" containerID="7fb8c7a686056708c5c1b8a4e1d17ba2105bb74443a85cfe5d247dc52c7f8e52" Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.575920 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:26:01 crc kubenswrapper[4681]: I0404 02:26:01.620449 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-c9c6dbd5-6ghst"] Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.195051 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.226622 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle\") pod \"de70d856-1c6d-498e-b548-1724bbc8cb66\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.226750 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvlgm\" (UniqueName: \"kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm\") pod \"de70d856-1c6d-498e-b548-1724bbc8cb66\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.226820 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom\") pod \"de70d856-1c6d-498e-b548-1724bbc8cb66\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.226895 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs\") pod \"de70d856-1c6d-498e-b548-1724bbc8cb66\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.226995 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data\") pod \"de70d856-1c6d-498e-b548-1724bbc8cb66\" (UID: \"de70d856-1c6d-498e-b548-1724bbc8cb66\") " Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.231988 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs" (OuterVolumeSpecName: "logs") pod "de70d856-1c6d-498e-b548-1724bbc8cb66" (UID: "de70d856-1c6d-498e-b548-1724bbc8cb66"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.238565 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "de70d856-1c6d-498e-b548-1724bbc8cb66" (UID: "de70d856-1c6d-498e-b548-1724bbc8cb66"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.241481 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm" (OuterVolumeSpecName: "kube-api-access-tvlgm") pod "de70d856-1c6d-498e-b548-1724bbc8cb66" (UID: "de70d856-1c6d-498e-b548-1724bbc8cb66"). InnerVolumeSpecName "kube-api-access-tvlgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.318557 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data" (OuterVolumeSpecName: "config-data") pod "de70d856-1c6d-498e-b548-1724bbc8cb66" (UID: "de70d856-1c6d-498e-b548-1724bbc8cb66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.318660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "de70d856-1c6d-498e-b548-1724bbc8cb66" (UID: "de70d856-1c6d-498e-b548-1724bbc8cb66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.331291 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.331339 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.331379 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvlgm\" (UniqueName: \"kubernetes.io/projected/de70d856-1c6d-498e-b548-1724bbc8cb66-kube-api-access-tvlgm\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.331394 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/de70d856-1c6d-498e-b548-1724bbc8cb66-config-data-custom\") on node 
\"crc\" DevicePath \"\"" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.331411 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de70d856-1c6d-498e-b548-1724bbc8cb66-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.519083 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerStarted","Data":"e44c6c6a1e56313c0292526cdd9785c17d31ed407177c66090cbf8597df3d5fd"} Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.519501 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerStarted","Data":"f35e348a7048b6bba66a293983bcfc56b466d048fd0e5e28500cbfb96c34a64e"} Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.523306 4681 generic.go:334] "Generic (PLEG): container finished" podID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerID="307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1" exitCode=0 Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.523337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerDied","Data":"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1"} Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.523358 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-75f6b5466d-jv68p" event={"ID":"de70d856-1c6d-498e-b548-1724bbc8cb66","Type":"ContainerDied","Data":"7bd0558ac2b9ecdf985c08c5c22f266e5f9bee72ee3e4ad74e6cdbf964435f49"} Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.523380 4681 scope.go:117] "RemoveContainer" containerID="307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.523503 4681 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-75f6b5466d-jv68p" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.550899 4681 scope.go:117] "RemoveContainer" containerID="9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.582307 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.595099 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-75f6b5466d-jv68p"] Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.597553 4681 scope.go:117] "RemoveContainer" containerID="307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1" Apr 04 02:26:02 crc kubenswrapper[4681]: E0404 02:26:02.599234 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1\": container with ID starting with 307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1 not found: ID does not exist" containerID="307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.599312 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1"} err="failed to get container status \"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1\": rpc error: code = NotFound desc = could not find container \"307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1\": container with ID starting with 307ea29142e5a8ac9efffe73f740a75316b7511980add14e24838d074e5bc7b1 not found: ID does not exist" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.599366 4681 scope.go:117] "RemoveContainer" 
containerID="9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5" Apr 04 02:26:02 crc kubenswrapper[4681]: E0404 02:26:02.599883 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5\": container with ID starting with 9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5 not found: ID does not exist" containerID="9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5" Apr 04 02:26:02 crc kubenswrapper[4681]: I0404 02:26:02.599931 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5"} err="failed to get container status \"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5\": rpc error: code = NotFound desc = could not find container \"9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5\": container with ID starting with 9005a9445ac119c8f75c372c7463598b76e4b46ccd22083dffbd6cfeb296d6f5 not found: ID does not exist" Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.214505 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" path="/var/lib/kubelet/pods/729d6f58-b3b5-4f01-a602-714dffa40001/volumes" Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.215927 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" path="/var/lib/kubelet/pods/de70d856-1c6d-498e-b548-1724bbc8cb66/volumes" Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.542398 4681 generic.go:334] "Generic (PLEG): container finished" podID="9130bd5c-6c50-412e-887f-4b22f4bc5377" containerID="52c68d734ba6a9807e80976e1095fb0728fa39370bd60b151a1f78a168d61c18" exitCode=0 Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.542475 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" event={"ID":"9130bd5c-6c50-412e-887f-4b22f4bc5377","Type":"ContainerDied","Data":"52c68d734ba6a9807e80976e1095fb0728fa39370bd60b151a1f78a168d61c18"} Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.552236 4681 generic.go:334] "Generic (PLEG): container finished" podID="442b54de-22a7-4121-aab3-5365d4e0872d" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" exitCode=1 Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.552308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerDied","Data":"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30"} Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.552365 4681 scope.go:117] "RemoveContainer" containerID="5c2e20ac3012a965348100f0100e8c91268dde1c0d1be74ad54fe599de278148" Apr 04 02:26:03 crc kubenswrapper[4681]: I0404 02:26:03.553681 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:03 crc kubenswrapper[4681]: E0404 02:26:03.553985 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(442b54de-22a7-4121-aab3-5365d4e0872d)\"" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" Apr 04 02:26:04 crc kubenswrapper[4681]: I0404 02:26:04.565132 4681 generic.go:334] "Generic (PLEG): container finished" podID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" containerID="48562f62ef66c90bddf853a7d75b62412d3429566b2fc45fddf86e672ea4d924" exitCode=0 Apr 04 02:26:04 crc kubenswrapper[4681]: I0404 02:26:04.565216 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sd54" 
event={"ID":"b185d1fc-0c71-44ee-bb6d-915189acc4d8","Type":"ContainerDied","Data":"48562f62ef66c90bddf853a7d75b62412d3429566b2fc45fddf86e672ea4d924"} Apr 04 02:26:04 crc kubenswrapper[4681]: I0404 02:26:04.974790 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.085545 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpbbb\" (UniqueName: \"kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb\") pod \"9130bd5c-6c50-412e-887f-4b22f4bc5377\" (UID: \"9130bd5c-6c50-412e-887f-4b22f4bc5377\") " Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.090829 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb" (OuterVolumeSpecName: "kube-api-access-kpbbb") pod "9130bd5c-6c50-412e-887f-4b22f4bc5377" (UID: "9130bd5c-6c50-412e-887f-4b22f4bc5377"). InnerVolumeSpecName "kube-api-access-kpbbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.188165 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpbbb\" (UniqueName: \"kubernetes.io/projected/9130bd5c-6c50-412e-887f-4b22f4bc5377-kube-api-access-kpbbb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.577606 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" event={"ID":"9130bd5c-6c50-412e-887f-4b22f4bc5377","Type":"ContainerDied","Data":"19810feb1e4720fb7d6f88a69dc047d2a405a8ac60cd2fa2c77794dd409cbce2"} Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.577657 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19810feb1e4720fb7d6f88a69dc047d2a405a8ac60cd2fa2c77794dd409cbce2" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.577622 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587826-vvnqf" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.580985 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerStarted","Data":"775de3a27be0611d87e5f6c5a10d9d4761755ac2b30f27536886ab9bc0d29215"} Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.581074 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.609766 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.588221548 podStartE2EDuration="7.609742052s" podCreationTimestamp="2026-04-04 02:25:58 +0000 UTC" firstStartedPulling="2026-04-04 02:25:59.405095592 +0000 UTC m=+1839.070870712" lastFinishedPulling="2026-04-04 02:26:04.426616086 +0000 UTC m=+1844.092391216" 
observedRunningTime="2026-04-04 02:26:05.606837143 +0000 UTC m=+1845.272612263" watchObservedRunningTime="2026-04-04 02:26:05.609742052 +0000 UTC m=+1845.275517192" Apr 04 02:26:05 crc kubenswrapper[4681]: I0404 02:26:05.926667 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sd54" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027499 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027767 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027856 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhls\" (UniqueName: \"kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027857 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027891 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027934 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.027984 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data\") pod \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\" (UID: \"b185d1fc-0c71-44ee-bb6d-915189acc4d8\") " Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.030178 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b185d1fc-0c71-44ee-bb6d-915189acc4d8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.032696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.053153 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587820-s4hzt"] Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.060320 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls" (OuterVolumeSpecName: "kube-api-access-xhhls") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "kube-api-access-xhhls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.061461 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts" (OuterVolumeSpecName: "scripts") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.065427 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.068536 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587820-s4hzt"] Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.101876 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data" (OuterVolumeSpecName: "config-data") pod "b185d1fc-0c71-44ee-bb6d-915189acc4d8" (UID: "b185d1fc-0c71-44ee-bb6d-915189acc4d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.132589 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.132667 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.132685 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhls\" (UniqueName: \"kubernetes.io/projected/b185d1fc-0c71-44ee-bb6d-915189acc4d8-kube-api-access-xhhls\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.132699 4681 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.132711 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b185d1fc-0c71-44ee-bb6d-915189acc4d8-scripts\") on node \"crc\" DevicePath \"\"" Apr 
04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.595644 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5sd54" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.595652 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5sd54" event={"ID":"b185d1fc-0c71-44ee-bb6d-915189acc4d8","Type":"ContainerDied","Data":"66dc9e92ec9e1366ac8c48892b9d73eec5af8954b4cfb518a32d83a4f99a8f1a"} Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.595717 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66dc9e92ec9e1366ac8c48892b9d73eec5af8954b4cfb518a32d83a4f99a8f1a" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.800035 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.800087 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.867676 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:06 crc kubenswrapper[4681]: I0404 02:26:06.876411 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.016415 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.019732 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.023688 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:07 crc kubenswrapper[4681]: 
E0404 02:26:07.024143 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024160 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api" Apr 04 02:26:07 crc kubenswrapper[4681]: E0404 02:26:07.024192 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9130bd5c-6c50-412e-887f-4b22f4bc5377" containerName="oc" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024201 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9130bd5c-6c50-412e-887f-4b22f4bc5377" containerName="oc" Apr 04 02:26:07 crc kubenswrapper[4681]: E0404 02:26:07.024219 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" containerName="cinder-db-sync" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024227 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" containerName="cinder-db-sync" Apr 04 02:26:07 crc kubenswrapper[4681]: E0404 02:26:07.024246 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="dnsmasq-dns" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024253 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="dnsmasq-dns" Apr 04 02:26:07 crc kubenswrapper[4681]: E0404 02:26:07.024317 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="init" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024325 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="init" Apr 04 02:26:07 crc kubenswrapper[4681]: E0404 02:26:07.024342 4681 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api-log" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024349 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api-log" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024559 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024580 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="de70d856-1c6d-498e-b548-1724bbc8cb66" containerName="barbican-api-log" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024592 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" containerName="cinder-db-sync" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024601 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9130bd5c-6c50-412e-887f-4b22f4bc5377" containerName="oc" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.024609 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="729d6f58-b3b5-4f01-a602-714dffa40001" containerName="dnsmasq-dns" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.025729 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.044481 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7fbt5" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.044739 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.044917 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.045059 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.086822 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.099538 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.102076 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.102182 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.142579 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.162439 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.229161 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72057968-aa72-4a22-aaea-f74196e09c9e" path="/var/lib/kubelet/pods/72057968-aa72-4a22-aaea-f74196e09c9e/volumes" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.261900 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.261965 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262041 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262122 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262137 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262160 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262184 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262200 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262218 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262240 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58l98\" (UniqueName: \"kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262275 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.262294 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq7xz\" (UniqueName: \"kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365291 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365320 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365342 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365358 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365373 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365394 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58l98\" (UniqueName: 
\"kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365414 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365431 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq7xz\" (UniqueName: \"kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365449 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365486 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.365610 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.367152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.367830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.370315 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.371189 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.371244 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.372021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.380183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.385906 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.391778 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.395109 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq7xz\" (UniqueName: \"kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz\") pod \"dnsmasq-dns-5b9b888545-tqsw9\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.396740 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.418070 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58l98\" (UniqueName: \"kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98\") pod \"cinder-scheduler-0\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.444774 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.455905 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.482767 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.487603 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.517358 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.519223 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.519428 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.520313 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.536970 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.622727 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.622920 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.622965 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.622975 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624069 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624179 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624213 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " 
pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624349 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6hmk\" (UniqueName: \"kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624373 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624407 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.624507 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.635238 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.639664 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.641770 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.675217 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.725753 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6hmk\" (UniqueName: \"kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.725801 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.725908 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.727071 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs\") pod 
\"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.760411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.775966 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6hmk\" (UniqueName: \"kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk\") pod \"cinder-api-0\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " pod="openstack/cinder-api-0" Apr 04 02:26:07 crc kubenswrapper[4681]: I0404 02:26:07.819678 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.170533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:08 crc kubenswrapper[4681]: W0404 02:26:08.192635 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d424526_1e00_412b_aeec_97b628067dcc.slice/crio-3f27dab3cabf2ce11240690c99a4ddd63e883fc3548511fdedcb6808657af66b WatchSource:0}: Error finding container 3f27dab3cabf2ce11240690c99a4ddd63e883fc3548511fdedcb6808657af66b: Status 404 returned error can't find the container with id 3f27dab3cabf2ce11240690c99a4ddd63e883fc3548511fdedcb6808657af66b Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.240372 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.240432 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.241227 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:08 crc kubenswrapper[4681]: E0404 02:26:08.241539 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(442b54de-22a7-4121-aab3-5365d4e0872d)\"" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.465453 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.602568 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.643256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerStarted","Data":"c800834743526457f90afe5cb822d44b83ee9867415b2af6827fa26b57761c2a"} Apr 04 02:26:08 crc kubenswrapper[4681]: I0404 02:26:08.649856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" event={"ID":"4d424526-1e00-412b-aeec-97b628067dcc","Type":"ContainerStarted","Data":"3f27dab3cabf2ce11240690c99a4ddd63e883fc3548511fdedcb6808657af66b"} Apr 04 02:26:08 crc kubenswrapper[4681]: W0404 02:26:08.690936 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a2bc18f_1ff7_410b_aae9_9d3e72475f11.slice/crio-f7118dc9253eb4d94d58aff5b463edc16e6f4998ce5b61f6aefeb0775e0b91f3 WatchSource:0}: Error finding container 
f7118dc9253eb4d94d58aff5b463edc16e6f4998ce5b61f6aefeb0775e0b91f3: Status 404 returned error can't find the container with id f7118dc9253eb4d94d58aff5b463edc16e6f4998ce5b61f6aefeb0775e0b91f3 Apr 04 02:26:09 crc kubenswrapper[4681]: I0404 02:26:09.679918 4681 generic.go:334] "Generic (PLEG): container finished" podID="4d424526-1e00-412b-aeec-97b628067dcc" containerID="a2ff3f13ed68c53268eafa1f89d7a2c93b07c7d1d86b47e17477c61317bdc131" exitCode=0 Apr 04 02:26:09 crc kubenswrapper[4681]: I0404 02:26:09.680010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" event={"ID":"4d424526-1e00-412b-aeec-97b628067dcc","Type":"ContainerDied","Data":"a2ff3f13ed68c53268eafa1f89d7a2c93b07c7d1d86b47e17477c61317bdc131"} Apr 04 02:26:09 crc kubenswrapper[4681]: I0404 02:26:09.722943 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerStarted","Data":"f7118dc9253eb4d94d58aff5b463edc16e6f4998ce5b61f6aefeb0775e0b91f3"} Apr 04 02:26:09 crc kubenswrapper[4681]: I0404 02:26:09.985932 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.000740 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-5cdf6cfbdd-xgxdx" Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.335740 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674794d9f6-5s9ps" Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.776961 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" event={"ID":"4d424526-1e00-412b-aeec-97b628067dcc","Type":"ContainerStarted","Data":"768ed4fea9c3befa39601443fd5b7b7889f1efb932f17cff0bd9fb7ae963924d"} Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.777404 4681 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.797558 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerStarted","Data":"09a3790ab7662b7c815fea8c8e00534b36649634c9378f052d61b4ac4bfdc410"} Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.813119 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" podStartSLOduration=4.813101226 podStartE2EDuration="4.813101226s" podCreationTimestamp="2026-04-04 02:26:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:10.806209958 +0000 UTC m=+1850.471985098" watchObservedRunningTime="2026-04-04 02:26:10.813101226 +0000 UTC m=+1850.478876346" Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.830307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerStarted","Data":"39e599d616688a4841b173fd90132bdf99361bbd442fa8675d3b765680b8be26"} Apr 04 02:26:10 crc kubenswrapper[4681]: I0404 02:26:10.967195 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.493715 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.494022 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.561298 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.570080 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.570171 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.843250 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerStarted","Data":"af6ab8eab99e4a5010566af8d91fdcaa5f24a7d0963f5ac0f3d9bff287632a29"} Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.845897 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api-log" containerID="cri-o://09a3790ab7662b7c815fea8c8e00534b36649634c9378f052d61b4ac4bfdc410" gracePeriod=30 Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.846173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerStarted","Data":"b31e805743e212c987a46b6dd7eb1642c92c05869f30f815e17ee5eaec44b5a5"} Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.846796 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.846845 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api" containerID="cri-o://b31e805743e212c987a46b6dd7eb1642c92c05869f30f815e17ee5eaec44b5a5" gracePeriod=30 Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.877952 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.545369898 podStartE2EDuration="5.877929987s" podCreationTimestamp="2026-04-04 02:26:06 +0000 
UTC" firstStartedPulling="2026-04-04 02:26:08.480938068 +0000 UTC m=+1848.146713198" lastFinishedPulling="2026-04-04 02:26:08.813498167 +0000 UTC m=+1848.479273287" observedRunningTime="2026-04-04 02:26:11.864890339 +0000 UTC m=+1851.530665459" watchObservedRunningTime="2026-04-04 02:26:11.877929987 +0000 UTC m=+1851.543705107" Apr 04 02:26:11 crc kubenswrapper[4681]: I0404 02:26:11.907220 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.90720347 podStartE2EDuration="4.90720347s" podCreationTimestamp="2026-04-04 02:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:11.904236659 +0000 UTC m=+1851.570011779" watchObservedRunningTime="2026-04-04 02:26:11.90720347 +0000 UTC m=+1851.572978590" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.409864 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.676825 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.874159 4681 generic.go:334] "Generic (PLEG): container finished" podID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerID="b31e805743e212c987a46b6dd7eb1642c92c05869f30f815e17ee5eaec44b5a5" exitCode=0 Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.874193 4681 generic.go:334] "Generic (PLEG): container finished" podID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerID="09a3790ab7662b7c815fea8c8e00534b36649634c9378f052d61b4ac4bfdc410" exitCode=143 Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.874256 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerDied","Data":"b31e805743e212c987a46b6dd7eb1642c92c05869f30f815e17ee5eaec44b5a5"} Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.874298 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerDied","Data":"09a3790ab7662b7c815fea8c8e00534b36649634c9378f052d61b4ac4bfdc410"} Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.975122 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.976714 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.978943 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-cr9hh" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.979151 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.980163 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Apr 04 02:26:12 crc kubenswrapper[4681]: I0404 02:26:12.984584 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.075516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.075586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-gkpnj\" (UniqueName: \"kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.075619 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.075664 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.177643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.177840 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.177911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkpnj\" (UniqueName: 
\"kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.177944 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.180907 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.187071 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.196212 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.200876 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkpnj\" (UniqueName: \"kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj\") pod \"openstackclient\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " 
pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.272314 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.273038 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.281128 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.356108 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.358138 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.382934 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.430620 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.484438 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.484512 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m29rr\" (UniqueName: \"kubernetes.io/projected/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-kube-api-access-m29rr\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.484560 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.484657 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: E0404 02:26:13.525460 4681 log.go:32] "RunPodSandbox from runtime service failed" err=< Apr 04 02:26:13 crc kubenswrapper[4681]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_23368cd0-88cf-4584-8a8d-8539f86d0b6d_0(5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd): 
error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd" Netns:"/var/run/netns/a3fdc288-2c8f-43a2-a41c-4030f7baf5f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd;K8S_POD_UID=23368cd0-88cf-4584-8a8d-8539f86d0b6d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/23368cd0-88cf-4584-8a8d-8539f86d0b6d]: expected pod UID "23368cd0-88cf-4584-8a8d-8539f86d0b6d" but got "e453c2ba-d2af-4ad5-8f25-91b386e9f9a6" from Kube API Apr 04 02:26:13 crc kubenswrapper[4681]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Apr 04 02:26:13 crc kubenswrapper[4681]: > Apr 04 02:26:13 crc kubenswrapper[4681]: E0404 02:26:13.525536 4681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Apr 04 02:26:13 crc kubenswrapper[4681]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_23368cd0-88cf-4584-8a8d-8539f86d0b6d_0(5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd" 
Netns:"/var/run/netns/a3fdc288-2c8f-43a2-a41c-4030f7baf5f6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=5e12d79e7dfcf3cee9e78fe488b4aaff4ac8ccf25c54bc0a0b0019ce46caf2bd;K8S_POD_UID=23368cd0-88cf-4584-8a8d-8539f86d0b6d" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/23368cd0-88cf-4584-8a8d-8539f86d0b6d]: expected pod UID "23368cd0-88cf-4584-8a8d-8539f86d0b6d" but got "e453c2ba-d2af-4ad5-8f25-91b386e9f9a6" from Kube API Apr 04 02:26:13 crc kubenswrapper[4681]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Apr 04 02:26:13 crc kubenswrapper[4681]: > pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.585543 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.585696 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586028 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586149 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586234 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6hmk\" (UniqueName: \"kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586419 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586528 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.586657 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle\") pod \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\" (UID: \"0a2bc18f-1ff7-410b-aae9-9d3e72475f11\") " Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.587098 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.587213 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m29rr\" (UniqueName: \"kubernetes.io/projected/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-kube-api-access-m29rr\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.587364 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.587759 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.587905 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-etc-machine-id\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.588835 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.591143 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs" (OuterVolumeSpecName: "logs") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.593448 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-openstack-config-secret\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.596070 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-combined-ca-bundle\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.597134 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.597429 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts" (OuterVolumeSpecName: "scripts") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.608597 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk" (OuterVolumeSpecName: "kube-api-access-c6hmk") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "kube-api-access-c6hmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.612890 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m29rr\" (UniqueName: \"kubernetes.io/projected/e453c2ba-d2af-4ad5-8f25-91b386e9f9a6-kube-api-access-m29rr\") pod \"openstackclient\" (UID: \"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6\") " pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.626477 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.648735 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data" (OuterVolumeSpecName: "config-data") pod "0a2bc18f-1ff7-410b-aae9-9d3e72475f11" (UID: "0a2bc18f-1ff7-410b-aae9-9d3e72475f11"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689719 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689755 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689768 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6hmk\" (UniqueName: \"kubernetes.io/projected/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-kube-api-access-c6hmk\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689781 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-config-data-custom\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689792 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.689803 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0a2bc18f-1ff7-410b-aae9-9d3e72475f11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.764845 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.910430 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.911590 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.929885 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0a2bc18f-1ff7-410b-aae9-9d3e72475f11","Type":"ContainerDied","Data":"f7118dc9253eb4d94d58aff5b463edc16e6f4998ce5b61f6aefeb0775e0b91f3"} Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.929969 4681 scope.go:117] "RemoveContainer" containerID="b31e805743e212c987a46b6dd7eb1642c92c05869f30f815e17ee5eaec44b5a5" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.937118 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="23368cd0-88cf-4584-8a8d-8539f86d0b6d" podUID="e453c2ba-d2af-4ad5-8f25-91b386e9f9a6" Apr 04 02:26:13 crc kubenswrapper[4681]: I0404 02:26:13.947010 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.005730 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.015772 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.019806 4681 scope.go:117] "RemoveContainer" containerID="09a3790ab7662b7c815fea8c8e00534b36649634c9378f052d61b4ac4bfdc410" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.028343 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:14 crc kubenswrapper[4681]: E0404 02:26:14.029133 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api-log" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.029154 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api-log" Apr 04 02:26:14 crc kubenswrapper[4681]: E0404 02:26:14.029199 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.029207 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.029426 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.029463 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" containerName="cinder-api-log" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.030471 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.034832 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.035175 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.039718 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.057144 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.095433 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret\") pod \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.095573 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config\") pod \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.095626 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkpnj\" (UniqueName: \"kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj\") pod \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.096108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle\") pod \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\" (UID: \"23368cd0-88cf-4584-8a8d-8539f86d0b6d\") " Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.096744 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.096901 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxpfn\" (UniqueName: \"kubernetes.io/projected/a1f293d4-d146-49d4-a75d-8e972a25b758-kube-api-access-jxpfn\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.096936 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "23368cd0-88cf-4584-8a8d-8539f86d0b6d" (UID: "23368cd0-88cf-4584-8a8d-8539f86d0b6d"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.097007 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.097069 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.097203 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1f293d4-d146-49d4-a75d-8e972a25b758-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.097255 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.097331 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 
02:26:14.098532 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f293d4-d146-49d4-a75d-8e972a25b758-logs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.098763 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-scripts\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.099023 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.112622 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "23368cd0-88cf-4584-8a8d-8539f86d0b6d" (UID: "23368cd0-88cf-4584-8a8d-8539f86d0b6d"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.112770 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj" (OuterVolumeSpecName: "kube-api-access-gkpnj") pod "23368cd0-88cf-4584-8a8d-8539f86d0b6d" (UID: "23368cd0-88cf-4584-8a8d-8539f86d0b6d"). InnerVolumeSpecName "kube-api-access-gkpnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.113478 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23368cd0-88cf-4584-8a8d-8539f86d0b6d" (UID: "23368cd0-88cf-4584-8a8d-8539f86d0b6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200230 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxpfn\" (UniqueName: \"kubernetes.io/projected/a1f293d4-d146-49d4-a75d-8e972a25b758-kube-api-access-jxpfn\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200379 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 
02:26:14.200443 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1f293d4-d146-49d4-a75d-8e972a25b758-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200513 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200550 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f293d4-d146-49d4-a75d-8e972a25b758-logs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200585 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-scripts\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200643 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkpnj\" (UniqueName: \"kubernetes.io/projected/23368cd0-88cf-4584-8a8d-8539f86d0b6d-kube-api-access-gkpnj\") on node \"crc\" DevicePath \"\"" Apr 04 
02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200655 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.200664 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/23368cd0-88cf-4584-8a8d-8539f86d0b6d-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.201697 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a1f293d4-d146-49d4-a75d-8e972a25b758-logs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.202776 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a1f293d4-d146-49d4-a75d-8e972a25b758-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.204120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-scripts\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.205367 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.205855 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.206879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.219064 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data-custom\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.224813 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxpfn\" (UniqueName: \"kubernetes.io/projected/a1f293d4-d146-49d4-a75d-8e972a25b758-kube-api-access-jxpfn\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.225253 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a1f293d4-d146-49d4-a75d-8e972a25b758-config-data\") pod \"cinder-api-0\" (UID: \"a1f293d4-d146-49d4-a75d-8e972a25b758\") " pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: W0404 02:26:14.279791 4681 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode453c2ba_d2af_4ad5_8f25_91b386e9f9a6.slice/crio-1e0fd3ab1ea1303d4c2fe71087b54b35316bb8ecb80a656d22e6e1b0c4e5d4d6 WatchSource:0}: Error finding container 1e0fd3ab1ea1303d4c2fe71087b54b35316bb8ecb80a656d22e6e1b0c4e5d4d6: Status 404 returned error can't find the container with id 1e0fd3ab1ea1303d4c2fe71087b54b35316bb8ecb80a656d22e6e1b0c4e5d4d6 Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.283873 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.349545 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.841727 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.924912 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6","Type":"ContainerStarted","Data":"1e0fd3ab1ea1303d4c2fe71087b54b35316bb8ecb80a656d22e6e1b0c4e5d4d6"} Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.928758 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1f293d4-d146-49d4-a75d-8e972a25b758","Type":"ContainerStarted","Data":"48b27482f82ee838ecf581cfeda7e98425bf139010ee1a0717719c4fc781af12"} Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.930034 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Apr 04 02:26:14 crc kubenswrapper[4681]: I0404 02:26:14.937663 4681 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="23368cd0-88cf-4584-8a8d-8539f86d0b6d" podUID="e453c2ba-d2af-4ad5-8f25-91b386e9f9a6" Apr 04 02:26:15 crc kubenswrapper[4681]: I0404 02:26:15.202023 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:26:15 crc kubenswrapper[4681]: E0404 02:26:15.202343 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:26:15 crc kubenswrapper[4681]: I0404 02:26:15.220767 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a2bc18f-1ff7-410b-aae9-9d3e72475f11" path="/var/lib/kubelet/pods/0a2bc18f-1ff7-410b-aae9-9d3e72475f11/volumes" Apr 04 02:26:15 crc kubenswrapper[4681]: I0404 02:26:15.230425 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23368cd0-88cf-4584-8a8d-8539f86d0b6d" path="/var/lib/kubelet/pods/23368cd0-88cf-4584-8a8d-8539f86d0b6d/volumes" Apr 04 02:26:15 crc kubenswrapper[4681]: I0404 02:26:15.942529 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a1f293d4-d146-49d4-a75d-8e972a25b758","Type":"ContainerStarted","Data":"59b1d778ceac22d0fa8b8d513b5e2d8ac783070644b0b72f0afdd1db70f47fc6"} Apr 04 02:26:16 crc kubenswrapper[4681]: I0404 02:26:16.954559 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"a1f293d4-d146-49d4-a75d-8e972a25b758","Type":"ContainerStarted","Data":"ff491550e6216b6d2f3f476e86982621d135e166081da9978ee72c9fe749033e"} Apr 04 02:26:16 crc kubenswrapper[4681]: I0404 02:26:16.955786 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Apr 04 02:26:16 crc kubenswrapper[4681]: I0404 02:26:16.986775 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.986757776 podStartE2EDuration="3.986757776s" podCreationTimestamp="2026-04-04 02:26:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:16.97669272 +0000 UTC m=+1856.642467830" watchObservedRunningTime="2026-04-04 02:26:16.986757776 +0000 UTC m=+1856.652532896" Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.446427 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.509746 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.514087 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="dnsmasq-dns" containerID="cri-o://9fd7f3ca436ef02929af93630a15e8f95c1b3765dfd66fb496d24bf0f659cc99" gracePeriod=10 Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.887599 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.930950 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.966605 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerID="9fd7f3ca436ef02929af93630a15e8f95c1b3765dfd66fb496d24bf0f659cc99" exitCode=0 Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.967044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerDied","Data":"9fd7f3ca436ef02929af93630a15e8f95c1b3765dfd66fb496d24bf0f659cc99"} Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.967669 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="cinder-scheduler" containerID="cri-o://39e599d616688a4841b173fd90132bdf99361bbd442fa8675d3b765680b8be26" gracePeriod=30 Apr 04 02:26:17 crc kubenswrapper[4681]: I0404 02:26:17.967715 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="probe" containerID="cri-o://af6ab8eab99e4a5010566af8d91fdcaa5f24a7d0963f5ac0f3d9bff287632a29" gracePeriod=30 Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.201535 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:19 crc kubenswrapper[4681]: E0404 02:26:19.202103 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(442b54de-22a7-4121-aab3-5365d4e0872d)\"" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.672574 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-8456d9bb7c-dcjw6"] Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 
02:26:19.675444 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.682956 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.683722 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.686970 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.702457 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8456d9bb7c-dcjw6"] Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833588 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-run-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833689 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-etc-swift\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833725 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb74q\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-kube-api-access-vb74q\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " 
pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833806 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-internal-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833879 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-config-data\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833905 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-public-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833930 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-log-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.833997 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-combined-ca-bundle\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: 
\"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935694 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-internal-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935795 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-config-data\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-public-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935850 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-log-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-combined-ca-bundle\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " 
pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.935964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-run-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.936028 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-etc-swift\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.936062 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb74q\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-kube-api-access-vb74q\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.936924 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-run-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.937624 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-log-httpd\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 
02:26:19.950718 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-combined-ca-bundle\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.951111 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-config-data\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.951682 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-internal-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.958065 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-public-tls-certs\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:19 crc kubenswrapper[4681]: I0404 02:26:19.973414 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-etc-swift\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.017993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vb74q\" (UniqueName: \"kubernetes.io/projected/cb09ea7e-aac7-4a55-962c-ca71e66e26a8-kube-api-access-vb74q\") pod \"swift-proxy-8456d9bb7c-dcjw6\" (UID: \"cb09ea7e-aac7-4a55-962c-ca71e66e26a8\") " pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.068948 4681 generic.go:334] "Generic (PLEG): container finished" podID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerID="af6ab8eab99e4a5010566af8d91fdcaa5f24a7d0963f5ac0f3d9bff287632a29" exitCode=0 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.068980 4681 generic.go:334] "Generic (PLEG): container finished" podID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerID="39e599d616688a4841b173fd90132bdf99361bbd442fa8675d3b765680b8be26" exitCode=0 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.069000 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerDied","Data":"af6ab8eab99e4a5010566af8d91fdcaa5f24a7d0963f5ac0f3d9bff287632a29"} Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.069027 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerDied","Data":"39e599d616688a4841b173fd90132bdf99361bbd442fa8675d3b765680b8be26"} Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.307597 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.871046 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.871386 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-central-agent" containerID="cri-o://f340523cc0fc74f993800c9aad21330a23f1226cbbdd538e546236fce48597ab" gracePeriod=30 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.871492 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="sg-core" containerID="cri-o://e44c6c6a1e56313c0292526cdd9785c17d31ed407177c66090cbf8597df3d5fd" gracePeriod=30 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.871494 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="proxy-httpd" containerID="cri-o://775de3a27be0611d87e5f6c5a10d9d4761755ac2b30f27536886ab9bc0d29215" gracePeriod=30 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.871509 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-notification-agent" containerID="cri-o://f35e348a7048b6bba66a293983bcfc56b466d048fd0e5e28500cbfb96c34a64e" gracePeriod=30 Apr 04 02:26:20 crc kubenswrapper[4681]: I0404 02:26:20.882678 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Apr 04 02:26:21 crc kubenswrapper[4681]: I0404 02:26:21.080678 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f2f493b-34f1-492d-834d-50b24313791c" 
containerID="d6503b4cf5a6450cce71c8e8e8835f62ed80d92a0688c8b72802cd1862e2f541" exitCode=0 Apr 04 02:26:21 crc kubenswrapper[4681]: I0404 02:26:21.080752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x6pg5" event={"ID":"0f2f493b-34f1-492d-834d-50b24313791c","Type":"ContainerDied","Data":"d6503b4cf5a6450cce71c8e8e8835f62ed80d92a0688c8b72802cd1862e2f541"} Apr 04 02:26:21 crc kubenswrapper[4681]: I0404 02:26:21.087688 4681 generic.go:334] "Generic (PLEG): container finished" podID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerID="e44c6c6a1e56313c0292526cdd9785c17d31ed407177c66090cbf8597df3d5fd" exitCode=2 Apr 04 02:26:21 crc kubenswrapper[4681]: I0404 02:26:21.087727 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerDied","Data":"e44c6c6a1e56313c0292526cdd9785c17d31ed407177c66090cbf8597df3d5fd"} Apr 04 02:26:22 crc kubenswrapper[4681]: I0404 02:26:22.173954 4681 generic.go:334] "Generic (PLEG): container finished" podID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerID="775de3a27be0611d87e5f6c5a10d9d4761755ac2b30f27536886ab9bc0d29215" exitCode=0 Apr 04 02:26:22 crc kubenswrapper[4681]: I0404 02:26:22.174194 4681 generic.go:334] "Generic (PLEG): container finished" podID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerID="f340523cc0fc74f993800c9aad21330a23f1226cbbdd538e546236fce48597ab" exitCode=0 Apr 04 02:26:22 crc kubenswrapper[4681]: I0404 02:26:22.173984 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerDied","Data":"775de3a27be0611d87e5f6c5a10d9d4761755ac2b30f27536886ab9bc0d29215"} Apr 04 02:26:22 crc kubenswrapper[4681]: I0404 02:26:22.174273 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerDied","Data":"f340523cc0fc74f993800c9aad21330a23f1226cbbdd538e546236fce48597ab"} Apr 04 02:26:23 crc kubenswrapper[4681]: I0404 02:26:23.188139 4681 generic.go:334] "Generic (PLEG): container finished" podID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerID="f35e348a7048b6bba66a293983bcfc56b466d048fd0e5e28500cbfb96c34a64e" exitCode=0 Apr 04 02:26:23 crc kubenswrapper[4681]: I0404 02:26:23.188484 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerDied","Data":"f35e348a7048b6bba66a293983bcfc56b466d048fd0e5e28500cbfb96c34a64e"} Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.842339 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.843448 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.174:5353: i/o timeout" Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.914356 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.914392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.914427 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.914523 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.915277 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8z4p\" (UniqueName: \"kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.915362 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb\") pod \"ed63431c-61d3-47d4-84a4-9eca959a780f\" (UID: \"ed63431c-61d3-47d4-84a4-9eca959a780f\") " Apr 04 02:26:24 crc kubenswrapper[4681]: I0404 02:26:24.947540 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p" (OuterVolumeSpecName: "kube-api-access-n8z4p") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "kube-api-access-n8z4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.020005 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8z4p\" (UniqueName: \"kubernetes.io/projected/ed63431c-61d3-47d4-84a4-9eca959a780f-kube-api-access-n8z4p\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.061589 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.066789 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config" (OuterVolumeSpecName: "config") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.092890 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.104818 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.118075 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ed63431c-61d3-47d4-84a4-9eca959a780f" (UID: "ed63431c-61d3-47d4-84a4-9eca959a780f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.122629 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.122668 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.122682 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.122694 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 
02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.122705 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ed63431c-61d3-47d4-84a4-9eca959a780f-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.131003 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.182466 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224418 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224524 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224556 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224612 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle\") pod \"0f2f493b-34f1-492d-834d-50b24313791c\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " Apr 04 
02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224689 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfpqj\" (UniqueName: \"kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224817 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dtb4\" (UniqueName: \"kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4\") pod \"0f2f493b-34f1-492d-834d-50b24313791c\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224857 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224924 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config\") pod \"0f2f493b-34f1-492d-834d-50b24313791c\" (UID: \"0f2f493b-34f1-492d-834d-50b24313791c\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224967 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.224999 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml\") pod \"4c366f78-0d36-4ad8-b037-f3156da30c73\" (UID: \"4c366f78-0d36-4ad8-b037-f3156da30c73\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.229608 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts" (OuterVolumeSpecName: "scripts") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.230132 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.230429 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.239968 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-x6pg5" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.245110 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj" (OuterVolumeSpecName: "kube-api-access-mfpqj") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). 
InnerVolumeSpecName "kube-api-access-mfpqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.245238 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.252513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4" (OuterVolumeSpecName: "kube-api-access-6dtb4") pod "0f2f493b-34f1-492d-834d-50b24313791c" (UID: "0f2f493b-34f1-492d-834d-50b24313791c"). InnerVolumeSpecName "kube-api-access-6dtb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.252743 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-x6pg5" event={"ID":"0f2f493b-34f1-492d-834d-50b24313791c","Type":"ContainerDied","Data":"4c8f6d122f10b3d61bb3e230f57dc4526c5c165cf1ba772e1a8b5d52af70b6e8"} Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.252794 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c8f6d122f10b3d61bb3e230f57dc4526c5c165cf1ba772e1a8b5d52af70b6e8" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.252810 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c5cf958f7-nnfhw" event={"ID":"ed63431c-61d3-47d4-84a4-9eca959a780f","Type":"ContainerDied","Data":"81aa56898b97447e3f1dd780c4e79980c077a51ea5ba0dba0f654f92e08af882"} Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.252839 4681 scope.go:117] "RemoveContainer" containerID="9fd7f3ca436ef02929af93630a15e8f95c1b3765dfd66fb496d24bf0f659cc99" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.260896 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"4c366f78-0d36-4ad8-b037-f3156da30c73","Type":"ContainerDied","Data":"f0999ea2808112013a444cb66b3ebf1b82105fc800e2c6b44486ce536917efee"} Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.261164 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.264774 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.265613 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config" (OuterVolumeSpecName: "config") pod "0f2f493b-34f1-492d-834d-50b24313791c" (UID: "0f2f493b-34f1-492d-834d-50b24313791c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.297546 4681 scope.go:117] "RemoveContainer" containerID="314cd7b265486d66ae3487235cc262a2d8f84ebce430a7ad3538684e03244d90" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336238 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfpqj\" (UniqueName: \"kubernetes.io/projected/4c366f78-0d36-4ad8-b037-f3156da30c73-kube-api-access-mfpqj\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336488 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dtb4\" (UniqueName: \"kubernetes.io/projected/0f2f493b-34f1-492d-834d-50b24313791c-kube-api-access-6dtb4\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336523 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336548 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336561 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336573 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.336582 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/4c366f78-0d36-4ad8-b037-f3156da30c73-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.346942 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.356681 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f2f493b-34f1-492d-834d-50b24313791c" (UID: "0f2f493b-34f1-492d-834d-50b24313791c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.367813 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c5cf958f7-nnfhw"] Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.393408 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.405491 4681 scope.go:117] "RemoveContainer" containerID="775de3a27be0611d87e5f6c5a10d9d4761755ac2b30f27536886ab9bc0d29215" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.442243 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.442314 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f2f493b-34f1-492d-834d-50b24313791c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.458289 4681 scope.go:117] "RemoveContainer" containerID="e44c6c6a1e56313c0292526cdd9785c17d31ed407177c66090cbf8597df3d5fd" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.477115 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data" (OuterVolumeSpecName: "config-data") pod "4c366f78-0d36-4ad8-b037-f3156da30c73" (UID: "4c366f78-0d36-4ad8-b037-f3156da30c73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.486876 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.488390 4681 scope.go:117] "RemoveContainer" containerID="f35e348a7048b6bba66a293983bcfc56b466d048fd0e5e28500cbfb96c34a64e" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.523916 4681 scope.go:117] "RemoveContainer" containerID="f340523cc0fc74f993800c9aad21330a23f1226cbbdd538e546236fce48597ab" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.545857 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.545997 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546034 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58l98\" (UniqueName: \"kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546076 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546106 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546142 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data\") pod \"8be34f0e-92b9-49f8-8164-090a9e4260e2\" (UID: \"8be34f0e-92b9-49f8-8164-090a9e4260e2\") " Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546905 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c366f78-0d36-4ad8-b037-f3156da30c73-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.546927 4681 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8be34f0e-92b9-49f8-8164-090a9e4260e2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.553635 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts" (OuterVolumeSpecName: "scripts") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.554642 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.558431 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98" (OuterVolumeSpecName: "kube-api-access-58l98") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "kube-api-access-58l98". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.627573 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.633859 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.647777 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58l98\" (UniqueName: \"kubernetes.io/projected/8be34f0e-92b9-49f8-8164-090a9e4260e2-kube-api-access-58l98\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.647819 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.647830 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.647840 4681 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.650825 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.667915 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668435 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-notification-agent" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668461 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-notification-agent" 
Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668482 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="probe" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668491 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="probe" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668512 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-central-agent" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668520 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-central-agent" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668551 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="init" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668559 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="init" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668577 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="dnsmasq-dns" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668588 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="dnsmasq-dns" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668611 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="proxy-httpd" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668619 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="proxy-httpd" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 
02:26:25.668634 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="cinder-scheduler" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668641 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="cinder-scheduler" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668657 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="sg-core" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668664 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="sg-core" Apr 04 02:26:25 crc kubenswrapper[4681]: E0404 02:26:25.668675 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2f493b-34f1-492d-834d-50b24313791c" containerName="neutron-db-sync" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668683 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2f493b-34f1-492d-834d-50b24313791c" containerName="neutron-db-sync" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668920 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-central-agent" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668934 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="proxy-httpd" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668951 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2f493b-34f1-492d-834d-50b24313791c" containerName="neutron-db-sync" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668963 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="ceilometer-notification-agent" Apr 04 02:26:25 crc 
kubenswrapper[4681]: I0404 02:26:25.668976 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" containerName="dnsmasq-dns" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.668991 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" containerName="sg-core" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.669008 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="cinder-scheduler" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.669021 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" containerName="probe" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.671253 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.676107 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.676740 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.708460 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data" (OuterVolumeSpecName: "config-data") pod "8be34f0e-92b9-49f8-8164-090a9e4260e2" (UID: "8be34f0e-92b9-49f8-8164-090a9e4260e2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.708555 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749284 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749345 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749365 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749426 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749454 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8z7\" (UniqueName: \"kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7\") pod \"ceilometer-0\" (UID: 
\"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749474 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749510 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.749561 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8be34f0e-92b9-49f8-8164-090a9e4260e2-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.793595 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-8456d9bb7c-dcjw6"] Apr 04 02:26:25 crc kubenswrapper[4681]: W0404 02:26:25.796704 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb09ea7e_aac7_4a55_962c_ca71e66e26a8.slice/crio-144445b9c91091f7f0a1fca6c326550c3bb86abc54447def443e61c02dd9732c WatchSource:0}: Error finding container 144445b9c91091f7f0a1fca6c326550c3bb86abc54447def443e61c02dd9732c: Status 404 returned error can't find the container with id 144445b9c91091f7f0a1fca6c326550c3bb86abc54447def443e61c02dd9732c Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.856806 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.856871 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8z7\" (UniqueName: \"kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.856904 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.856971 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.857038 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.857094 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 
02:26:25.857120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.857583 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.861118 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.862617 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.866109 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.868490 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " 
pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.869948 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.884651 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8z7\" (UniqueName: \"kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7\") pod \"ceilometer-0\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " pod="openstack/ceilometer-0" Apr 04 02:26:25 crc kubenswrapper[4681]: I0404 02:26:25.996654 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.204919 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:26:26 crc kubenswrapper[4681]: E0404 02:26:26.205646 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.317835 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"e453c2ba-d2af-4ad5-8f25-91b386e9f9a6","Type":"ContainerStarted","Data":"0bdc788a1f4786b349805661c3541fbc0375a2bbc99af9d5a2887d65bee9474d"} Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.323172 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.325499 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.333382 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.333385 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8be34f0e-92b9-49f8-8164-090a9e4260e2","Type":"ContainerDied","Data":"c800834743526457f90afe5cb822d44b83ee9867415b2af6827fa26b57761c2a"} Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.333441 4681 scope.go:117] "RemoveContainer" containerID="af6ab8eab99e4a5010566af8d91fdcaa5f24a7d0963f5ac0f3d9bff287632a29" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.341551 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.348630 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" event={"ID":"cb09ea7e-aac7-4a55-962c-ca71e66e26a8","Type":"ContainerStarted","Data":"e969d568ff5ed0706cac40c30989cafd8c337a1f44a165f7c509d5cbfbd4a502"} Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.348682 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" event={"ID":"cb09ea7e-aac7-4a55-962c-ca71e66e26a8","Type":"ContainerStarted","Data":"144445b9c91091f7f0a1fca6c326550c3bb86abc54447def443e61c02dd9732c"} Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.352353 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.57897218 podStartE2EDuration="13.352335075s" podCreationTimestamp="2026-04-04 02:26:13 +0000 UTC" 
firstStartedPulling="2026-04-04 02:26:14.28312812 +0000 UTC m=+1853.948903240" lastFinishedPulling="2026-04-04 02:26:25.056491015 +0000 UTC m=+1864.722266135" observedRunningTime="2026-04-04 02:26:26.336537352 +0000 UTC m=+1866.002312472" watchObservedRunningTime="2026-04-04 02:26:26.352335075 +0000 UTC m=+1866.018110195" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373186 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373346 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373376 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373442 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373461 
4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.373500 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf99b\" (UniqueName: \"kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.416710 4681 scope.go:117] "RemoveContainer" containerID="39e599d616688a4841b173fd90132bdf99361bbd442fa8675d3b765680b8be26" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.456449 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476291 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476594 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476811 4681 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476844 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476881 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf99b\" (UniqueName: \"kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.476956 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.477081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.477863 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.478355 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.478432 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.481130 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.481710 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.503380 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf99b\" (UniqueName: \"kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b\") pod \"dnsmasq-dns-59c75684f5-bch4x\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.510725 4681 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.512374 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.518823 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.519593 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-5gjfq" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.519898 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.520104 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.535860 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.551949 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.553901 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.556599 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.568873 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.636000 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:26 crc kubenswrapper[4681]: W0404 02:26:26.637428 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42021f6e_0d10_4792_a0c1_14ae6fbe581d.slice/crio-56dcee75d3f0de349386285f69dd256130c06db337d5e693802e18f35170a034 WatchSource:0}: Error finding container 56dcee75d3f0de349386285f69dd256130c06db337d5e693802e18f35170a034: Status 404 returned error can't find the container with id 56dcee75d3f0de349386285f69dd256130c06db337d5e693802e18f35170a034 Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.664376 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683262 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-scripts\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8847f1-00d6-45d1-a106-b2c8c69abb35-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683403 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683493 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683603 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " 
pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683645 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683678 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683716 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cblv\" (UniqueName: \"kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683752 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd6sw\" (UniqueName: \"kubernetes.io/projected/df8847f1-00d6-45d1-a106-b2c8c69abb35-kube-api-access-sd6sw\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " 
pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.683852 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786241 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786287 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786317 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cblv\" (UniqueName: \"kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 
02:26:26.786346 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd6sw\" (UniqueName: \"kubernetes.io/projected/df8847f1-00d6-45d1-a106-b2c8c69abb35-kube-api-access-sd6sw\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786379 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786406 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786481 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-scripts\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8847f1-00d6-45d1-a106-b2c8c69abb35-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.786682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.796122 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.797343 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/df8847f1-00d6-45d1-a106-b2c8c69abb35-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.799167 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.806841 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: 
\"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.807498 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.812763 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-scripts\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.816917 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cblv\" (UniqueName: \"kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.822937 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd6sw\" (UniqueName: \"kubernetes.io/projected/df8847f1-00d6-45d1-a106-b2c8c69abb35-kube-api-access-sd6sw\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.823169 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config\") pod \"neutron-f7f9d8bd4-j5zm6\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.823179 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.825993 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df8847f1-00d6-45d1-a106-b2c8c69abb35-config-data\") pod \"cinder-scheduler-0\" (UID: \"df8847f1-00d6-45d1-a106-b2c8c69abb35\") " pod="openstack/cinder-scheduler-0" Apr 04 02:26:26 crc kubenswrapper[4681]: I0404 02:26:26.844821 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:26.905830 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.275940 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c366f78-0d36-4ad8-b037-f3156da30c73" path="/var/lib/kubelet/pods/4c366f78-0d36-4ad8-b037-f3156da30c73/volumes" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.397251 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" podStartSLOduration=8.397229738 podStartE2EDuration="8.397229738s" podCreationTimestamp="2026-04-04 02:26:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:27.393714822 +0000 UTC m=+1867.059489952" watchObservedRunningTime="2026-04-04 02:26:27.397229738 +0000 UTC m=+1867.063004858" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.820145 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be34f0e-92b9-49f8-8164-090a9e4260e2" 
path="/var/lib/kubelet/pods/8be34f0e-92b9-49f8-8164-090a9e4260e2/volumes" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.821398 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed63431c-61d3-47d4-84a4-9eca959a780f" path="/var/lib/kubelet/pods/ed63431c-61d3-47d4-84a4-9eca959a780f/volumes" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.822015 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.822043 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.822076 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.822087 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" event={"ID":"cb09ea7e-aac7-4a55-962c-ca71e66e26a8","Type":"ContainerStarted","Data":"92aba2399814c2cffdaccd7db11dcc38bd06058ea1acb2b19fa18398e4acf344"} Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:27.822101 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerStarted","Data":"56dcee75d3f0de349386285f69dd256130c06db337d5e693802e18f35170a034"} Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.239668 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.239987 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.240854 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 
02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.387081 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerStarted","Data":"ce69e31748b18970cbda3ebe588df511848e5a3bc908a2219ebae25f0265c976"} Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.930280 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-554fd9954f-c5kv8"] Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.931873 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.936456 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:28.936777 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.000364 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-554fd9954f-c5kv8"] Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-httpd-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035665 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035707 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-ovndb-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035766 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-combined-ca-bundle\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035886 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-internal-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035921 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-public-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.035968 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6shrp\" (UniqueName: \"kubernetes.io/projected/99648c0a-d8f3-41f8-a03d-7a21a4a84156-kube-api-access-6shrp\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.137874 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.137945 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-ovndb-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.138009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-combined-ca-bundle\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.138127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-internal-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.138165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-public-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.138243 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6shrp\" 
(UniqueName: \"kubernetes.io/projected/99648c0a-d8f3-41f8-a03d-7a21a4a84156-kube-api-access-6shrp\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.138360 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-httpd-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.154988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-internal-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.155120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.156827 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-combined-ca-bundle\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.158435 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-httpd-config\") pod \"neutron-554fd9954f-c5kv8\" (UID: 
\"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.160623 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-ovndb-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.160745 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/99648c0a-d8f3-41f8-a03d-7a21a4a84156-public-tls-certs\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.208573 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6shrp\" (UniqueName: \"kubernetes.io/projected/99648c0a-d8f3-41f8-a03d-7a21a4a84156-kube-api-access-6shrp\") pod \"neutron-554fd9954f-c5kv8\" (UID: \"99648c0a-d8f3-41f8-a03d-7a21a4a84156\") " pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.255522 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.437630 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerStarted","Data":"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48"} Apr 04 02:26:29 crc kubenswrapper[4681]: I0404 02:26:29.993832 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:26:30 crc kubenswrapper[4681]: I0404 02:26:30.487709 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Apr 04 02:26:30 crc kubenswrapper[4681]: I0404 02:26:30.488038 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerStarted","Data":"01a8296f6c0ad0b9a1307b68fbec9f2069bec3878a9f32dd7c5d15e69d92ac5b"} Apr 04 02:26:30 crc kubenswrapper[4681]: I0404 02:26:30.493238 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" event={"ID":"c7b18d01-a152-407d-94a5-993382ffd32f","Type":"ContainerStarted","Data":"1a37f8c308fab2ba97c7919fa47e010ff2e51eb483df45bb49e0f50f8c0e2d92"} Apr 04 02:26:30 crc kubenswrapper[4681]: I0404 02:26:30.601208 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-554fd9954f-c5kv8"] Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.248465 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.511815 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554fd9954f-c5kv8" event={"ID":"99648c0a-d8f3-41f8-a03d-7a21a4a84156","Type":"ContainerStarted","Data":"33badb46f3cb6ba0714e112bd94f7329cf8fce2c1b6c4d345c085438e01153c9"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 
02:26:31.511869 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554fd9954f-c5kv8" event={"ID":"99648c0a-d8f3-41f8-a03d-7a21a4a84156","Type":"ContainerStarted","Data":"d9ab44d15bf9a1d1f7f18371bfb31de5282f224cc5b7bc11e544a380d732d7db"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.511882 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-554fd9954f-c5kv8" event={"ID":"99648c0a-d8f3-41f8-a03d-7a21a4a84156","Type":"ContainerStarted","Data":"4f6e2098c54517a34e01814ca91e45296d8d53a6a0d876b70e411cc5ba932edf"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.512388 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.528974 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"df8847f1-00d6-45d1-a106-b2c8c69abb35","Type":"ContainerStarted","Data":"f22446e8b78b1f0c0e968f96187ccd7ef65ecb1cbd24824e1388a57888340d69"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.540489 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-554fd9954f-c5kv8" podStartSLOduration=3.540466262 podStartE2EDuration="3.540466262s" podCreationTimestamp="2026-04-04 02:26:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:31.533529092 +0000 UTC m=+1871.199304212" watchObservedRunningTime="2026-04-04 02:26:31.540466262 +0000 UTC m=+1871.206241382" Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.543545 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerStarted","Data":"48d1faae73814fccab342495ee67c1f8b0d95df949a5a9fdca1c14a0f68e6457"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.549890 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerStarted","Data":"d64ed9f84d1ef1f17afdc9d256c417b44ade742352fd5f93363deb484b89da12"} Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.552462 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7b18d01-a152-407d-94a5-993382ffd32f" containerID="b11b15c807303d02a07c1c177900ee213efa673f59eb41ff80f451e08936052a" exitCode=0 Apr 04 02:26:31 crc kubenswrapper[4681]: I0404 02:26:31.552507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" event={"ID":"c7b18d01-a152-407d-94a5-993382ffd32f","Type":"ContainerDied","Data":"b11b15c807303d02a07c1c177900ee213efa673f59eb41ff80f451e08936052a"} Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.478826 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.575740 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" event={"ID":"c7b18d01-a152-407d-94a5-993382ffd32f","Type":"ContainerStarted","Data":"5a037ab7c6105493761488a21dba9444577bf51cd20cd68c1034da93d6eff562"} Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.575834 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.579467 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"df8847f1-00d6-45d1-a106-b2c8c69abb35","Type":"ContainerStarted","Data":"b4c468b1ac4758684ec9992870cb19eb9c5aea599eac1fa4acf788ebfdc36adb"} Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.594591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" 
event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerStarted","Data":"aecfcd3810bcc4dad67b72df2fc14884155f796fae37d75380459f9f72523479"} Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.594644 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerStarted","Data":"b2222ed78e6d8f588226b2399a03c39faddeeb5eb23772118a56a534823a6175"} Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.598978 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" podStartSLOduration=6.598961068 podStartE2EDuration="6.598961068s" podCreationTimestamp="2026-04-04 02:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:32.597485768 +0000 UTC m=+1872.263260888" watchObservedRunningTime="2026-04-04 02:26:32.598961068 +0000 UTC m=+1872.264736198" Apr 04 02:26:32 crc kubenswrapper[4681]: I0404 02:26:32.629605 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f7f9d8bd4-j5zm6" podStartSLOduration=6.629584279 podStartE2EDuration="6.629584279s" podCreationTimestamp="2026-04-04 02:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:32.616449168 +0000 UTC m=+1872.282224298" watchObservedRunningTime="2026-04-04 02:26:32.629584279 +0000 UTC m=+1872.295359399" Apr 04 02:26:33 crc kubenswrapper[4681]: I0404 02:26:33.606959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"df8847f1-00d6-45d1-a106-b2c8c69abb35","Type":"ContainerStarted","Data":"68e48878a86264c1889391167ee551a77ad81fa672b2d14b261b69ff690acacf"} Apr 04 02:26:33 crc kubenswrapper[4681]: I0404 02:26:33.607555 4681 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:26:33 crc kubenswrapper[4681]: I0404 02:26:33.635206 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.6351821730000005 podStartE2EDuration="7.635182173s" podCreationTimestamp="2026-04-04 02:26:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:33.630436082 +0000 UTC m=+1873.296211212" watchObservedRunningTime="2026-04-04 02:26:33.635182173 +0000 UTC m=+1873.300957293" Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.622714 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-central-agent" containerID="cri-o://ce69e31748b18970cbda3ebe588df511848e5a3bc908a2219ebae25f0265c976" gracePeriod=30 Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.623028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerStarted","Data":"a60734de64b56f2e489c46ace9a77ce201e384e3b150ed2ad437a43048b12d22"} Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.623597 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.623851 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="proxy-httpd" containerID="cri-o://a60734de64b56f2e489c46ace9a77ce201e384e3b150ed2ad437a43048b12d22" gracePeriod=30 Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.623915 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="sg-core" containerID="cri-o://d64ed9f84d1ef1f17afdc9d256c417b44ade742352fd5f93363deb484b89da12" gracePeriod=30 Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.623953 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-notification-agent" containerID="cri-o://01a8296f6c0ad0b9a1307b68fbec9f2069bec3878a9f32dd7c5d15e69d92ac5b" gracePeriod=30 Apr 04 02:26:34 crc kubenswrapper[4681]: I0404 02:26:34.657286 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.4423545779999998 podStartE2EDuration="9.657248559s" podCreationTimestamp="2026-04-04 02:26:25 +0000 UTC" firstStartedPulling="2026-04-04 02:26:26.639292783 +0000 UTC m=+1866.305067913" lastFinishedPulling="2026-04-04 02:26:32.854186774 +0000 UTC m=+1872.519961894" observedRunningTime="2026-04-04 02:26:34.649693881 +0000 UTC m=+1874.315469021" watchObservedRunningTime="2026-04-04 02:26:34.657248559 +0000 UTC m=+1874.323023679" Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.320766 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.322108 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634647 4681 generic.go:334] "Generic (PLEG): container finished" podID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerID="a60734de64b56f2e489c46ace9a77ce201e384e3b150ed2ad437a43048b12d22" exitCode=0 Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634688 4681 generic.go:334] "Generic (PLEG): container finished" podID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" 
containerID="d64ed9f84d1ef1f17afdc9d256c417b44ade742352fd5f93363deb484b89da12" exitCode=2 Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634699 4681 generic.go:334] "Generic (PLEG): container finished" podID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerID="01a8296f6c0ad0b9a1307b68fbec9f2069bec3878a9f32dd7c5d15e69d92ac5b" exitCode=0 Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634687 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerDied","Data":"a60734de64b56f2e489c46ace9a77ce201e384e3b150ed2ad437a43048b12d22"} Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634735 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerDied","Data":"d64ed9f84d1ef1f17afdc9d256c417b44ade742352fd5f93363deb484b89da12"} Apr 04 02:26:35 crc kubenswrapper[4681]: I0404 02:26:35.634752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerDied","Data":"01a8296f6c0ad0b9a1307b68fbec9f2069bec3878a9f32dd7c5d15e69d92ac5b"} Apr 04 02:26:36 crc kubenswrapper[4681]: I0404 02:26:36.646773 4681 generic.go:334] "Generic (PLEG): container finished" podID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerID="ce69e31748b18970cbda3ebe588df511848e5a3bc908a2219ebae25f0265c976" exitCode=0 Apr 04 02:26:36 crc kubenswrapper[4681]: I0404 02:26:36.647444 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerDied","Data":"ce69e31748b18970cbda3ebe588df511848e5a3bc908a2219ebae25f0265c976"} Apr 04 02:26:36 crc kubenswrapper[4681]: I0404 02:26:36.667468 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:26:36 crc 
kubenswrapper[4681]: I0404 02:26:36.756154 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:36 crc kubenswrapper[4681]: I0404 02:26:36.757034 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="dnsmasq-dns" containerID="cri-o://768ed4fea9c3befa39601443fd5b7b7889f1efb932f17cff0bd9fb7ae963924d" gracePeriod=10 Apr 04 02:26:36 crc kubenswrapper[4681]: I0404 02:26:36.906567 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.112871 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.224079 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397065 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397128 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397203 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd\") pod 
\"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397254 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397454 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397666 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397731 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.397959 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.398052 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs8z7\" (UniqueName: \"kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7\") pod \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\" (UID: \"42021f6e-0d10-4792-a0c1-14ae6fbe581d\") " Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.398895 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.398919 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/42021f6e-0d10-4792-a0c1-14ae6fbe581d-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.403597 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7" (OuterVolumeSpecName: "kube-api-access-xs8z7") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "kube-api-access-xs8z7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.416125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts" (OuterVolumeSpecName: "scripts") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.439590 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.446465 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.183:5353: connect: connection refused" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.492409 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.500955 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs8z7\" (UniqueName: \"kubernetes.io/projected/42021f6e-0d10-4792-a0c1-14ae6fbe581d-kube-api-access-xs8z7\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.501012 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.501022 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.501032 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.512478 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data" (OuterVolumeSpecName: "config-data") pod "42021f6e-0d10-4792-a0c1-14ae6fbe581d" (UID: "42021f6e-0d10-4792-a0c1-14ae6fbe581d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.602857 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42021f6e-0d10-4792-a0c1-14ae6fbe581d-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.658969 4681 generic.go:334] "Generic (PLEG): container finished" podID="4d424526-1e00-412b-aeec-97b628067dcc" containerID="768ed4fea9c3befa39601443fd5b7b7889f1efb932f17cff0bd9fb7ae963924d" exitCode=0 Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.659436 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" event={"ID":"4d424526-1e00-412b-aeec-97b628067dcc","Type":"ContainerDied","Data":"768ed4fea9c3befa39601443fd5b7b7889f1efb932f17cff0bd9fb7ae963924d"} Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.662173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"42021f6e-0d10-4792-a0c1-14ae6fbe581d","Type":"ContainerDied","Data":"56dcee75d3f0de349386285f69dd256130c06db337d5e693802e18f35170a034"} Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.662200 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.662224 4681 scope.go:117] "RemoveContainer" containerID="a60734de64b56f2e489c46ace9a77ce201e384e3b150ed2ad437a43048b12d22" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.695805 4681 scope.go:117] "RemoveContainer" containerID="d64ed9f84d1ef1f17afdc9d256c417b44ade742352fd5f93363deb484b89da12" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.733357 4681 scope.go:117] "RemoveContainer" containerID="01a8296f6c0ad0b9a1307b68fbec9f2069bec3878a9f32dd7c5d15e69d92ac5b" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.734778 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.750623 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.758530 4681 scope.go:117] "RemoveContainer" containerID="ce69e31748b18970cbda3ebe588df511848e5a3bc908a2219ebae25f0265c976" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.761901 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:37 crc kubenswrapper[4681]: E0404 02:26:37.762327 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="proxy-httpd" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762359 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="proxy-httpd" Apr 04 02:26:37 crc kubenswrapper[4681]: E0404 02:26:37.762378 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-notification-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762385 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" 
containerName="ceilometer-notification-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: E0404 02:26:37.762418 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-central-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762425 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-central-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: E0404 02:26:37.762438 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="sg-core" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762445 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="sg-core" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762607 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-central-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762628 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="ceilometer-notification-agent" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762643 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="sg-core" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.762657 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" containerName="proxy-httpd" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.764604 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.774117 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.774231 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.779610 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912078 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912126 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912206 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912454 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghl4s\" (UniqueName: \"kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s\") pod \"ceilometer-0\" (UID: 
\"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912601 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912798 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:37 crc kubenswrapper[4681]: I0404 02:26:37.912869 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015512 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015594 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015663 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015766 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghl4s\" (UniqueName: \"kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015840 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.015966 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.016455 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" 
Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.016709 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.020239 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.021406 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.022044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.025958 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts\") pod \"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.045964 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghl4s\" (UniqueName: \"kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s\") pod 
\"ceilometer-0\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.083932 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.242484 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.280849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.553221 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.605984 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.676581 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" event={"ID":"4d424526-1e00-412b-aeec-97b628067dcc","Type":"ContainerDied","Data":"3f27dab3cabf2ce11240690c99a4ddd63e883fc3548511fdedcb6808657af66b"} Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.676612 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b9b888545-tqsw9" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.676643 4681 scope.go:117] "RemoveContainer" containerID="768ed4fea9c3befa39601443fd5b7b7889f1efb932f17cff0bd9fb7ae963924d" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.679996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerStarted","Data":"427b3dd696687ae35765da4e1b142e80bb1c9f4af9b32c19afd3d8f59b0232aa"} Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.680147 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.703433 4681 scope.go:117] "RemoveContainer" containerID="a2ff3f13ed68c53268eafa1f89d7a2c93b07c7d1d86b47e17477c61317bdc131" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.709540 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733224 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733333 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733429 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wq7xz\" (UniqueName: 
\"kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733471 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733505 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.733531 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0\") pod \"4d424526-1e00-412b-aeec-97b628067dcc\" (UID: \"4d424526-1e00-412b-aeec-97b628067dcc\") " Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.745759 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz" (OuterVolumeSpecName: "kube-api-access-wq7xz") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "kube-api-access-wq7xz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.746395 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wq7xz\" (UniqueName: \"kubernetes.io/projected/4d424526-1e00-412b-aeec-97b628067dcc-kube-api-access-wq7xz\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.794026 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.794560 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config" (OuterVolumeSpecName: "config") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.797816 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.811833 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.817229 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4d424526-1e00-412b-aeec-97b628067dcc" (UID: "4d424526-1e00-412b-aeec-97b628067dcc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.848500 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.848537 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.848547 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:38 crc kubenswrapper[4681]: I0404 02:26:38.848555 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:38 crc 
kubenswrapper[4681]: I0404 02:26:38.848564 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4d424526-1e00-412b-aeec-97b628067dcc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:39 crc kubenswrapper[4681]: I0404 02:26:39.008160 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:39 crc kubenswrapper[4681]: I0404 02:26:39.017178 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b9b888545-tqsw9"] Apr 04 02:26:39 crc kubenswrapper[4681]: I0404 02:26:39.213115 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42021f6e-0d10-4792-a0c1-14ae6fbe581d" path="/var/lib/kubelet/pods/42021f6e-0d10-4792-a0c1-14ae6fbe581d/volumes" Apr 04 02:26:39 crc kubenswrapper[4681]: I0404 02:26:39.214313 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d424526-1e00-412b-aeec-97b628067dcc" path="/var/lib/kubelet/pods/4d424526-1e00-412b-aeec-97b628067dcc/volumes" Apr 04 02:26:41 crc kubenswrapper[4681]: I0404 02:26:41.211808 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:26:41 crc kubenswrapper[4681]: E0404 02:26:41.212416 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:26:43 crc kubenswrapper[4681]: I0404 02:26:43.804760 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerStarted","Data":"501a283427a89ee1724c49da0ccc3c0d3ef755362f51d1fdcfa0a1950b614d83"}
Apr 04 02:26:43 crc kubenswrapper[4681]: I0404 02:26:43.806165 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerStarted","Data":"5853721a4ea3ee9282934db6901495b8247b930d117e6e7feb74f3bbcd07a87f"}
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.353697 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"]
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.355543 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" containerID="cri-o://ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48" gracePeriod=30
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.372500 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.372890 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api-log" containerID="cri-o://8fa9d5cdca90012952f0955ee8036a245dcd2d90f3de93af8a23434c9c6823b1" gracePeriod=30
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.373288 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api" containerID="cri-o://9c4d8c127d615e4b91cccd2399a7ae10bbe006ecd4f7e254d1c65c1445b9fbcb" gracePeriod=30
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.390629 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.390872 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerName="watcher-applier" containerID="cri-o://716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87" gracePeriod=30
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.825589 4681 generic.go:334] "Generic (PLEG): container finished" podID="4edd5da9-fac6-4908-9036-dca43081ea71" containerID="8fa9d5cdca90012952f0955ee8036a245dcd2d90f3de93af8a23434c9c6823b1" exitCode=143
Apr 04 02:26:45 crc kubenswrapper[4681]: I0404 02:26:45.825638 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerDied","Data":"8fa9d5cdca90012952f0955ee8036a245dcd2d90f3de93af8a23434c9c6823b1"}
Apr 04 02:26:46 crc kubenswrapper[4681]: I0404 02:26:46.858398 4681 generic.go:334] "Generic (PLEG): container finished" podID="4edd5da9-fac6-4908-9036-dca43081ea71" containerID="9c4d8c127d615e4b91cccd2399a7ae10bbe006ecd4f7e254d1c65c1445b9fbcb" exitCode=0
Apr 04 02:26:46 crc kubenswrapper[4681]: I0404 02:26:46.858487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerDied","Data":"9c4d8c127d615e4b91cccd2399a7ae10bbe006ecd4f7e254d1c65c1445b9fbcb"}
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.296225 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454145 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454185 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454317 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454357 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454419 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.454457 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5vhj\" (UniqueName: \"kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj\") pod \"4edd5da9-fac6-4908-9036-dca43081ea71\" (UID: \"4edd5da9-fac6-4908-9036-dca43081ea71\") "
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.455234 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs" (OuterVolumeSpecName: "logs") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.469092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj" (OuterVolumeSpecName: "kube-api-access-q5vhj") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "kube-api-access-q5vhj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.508536 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.526582 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.536680 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.540511 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data" (OuterVolumeSpecName: "config-data") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.547325 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4edd5da9-fac6-4908-9036-dca43081ea71" (UID: "4edd5da9-fac6-4908-9036-dca43081ea71"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556788 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5vhj\" (UniqueName: \"kubernetes.io/projected/4edd5da9-fac6-4908-9036-dca43081ea71-kube-api-access-q5vhj\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556821 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556852 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-public-tls-certs\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556863 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-config-data\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556872 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556882 4681 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4edd5da9-fac6-4908-9036-dca43081ea71-custom-prometheus-ca\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.556892 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4edd5da9-fac6-4908-9036-dca43081ea71-logs\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.870845 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.871318 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"4edd5da9-fac6-4908-9036-dca43081ea71","Type":"ContainerDied","Data":"26c152499389a5c46d55e752b35647d987efcfc3ce8bd799faba70b2a2d8765c"}
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.871376 4681 scope.go:117] "RemoveContainer" containerID="9c4d8c127d615e4b91cccd2399a7ae10bbe006ecd4f7e254d1c65c1445b9fbcb"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.877843 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerStarted","Data":"2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc"}
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.915564 4681 scope.go:117] "RemoveContainer" containerID="8fa9d5cdca90012952f0955ee8036a245dcd2d90f3de93af8a23434c9c6823b1"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.915865 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.929976 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.941195 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:47 crc kubenswrapper[4681]: E0404 02:26:47.941698 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.941726 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api"
Apr 04 02:26:47 crc kubenswrapper[4681]: E0404 02:26:47.941767 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api-log"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.941780 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api-log"
Apr 04 02:26:47 crc kubenswrapper[4681]: E0404 02:26:47.941796 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="dnsmasq-dns"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.941804 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="dnsmasq-dns"
Apr 04 02:26:47 crc kubenswrapper[4681]: E0404 02:26:47.941821 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="init"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.941830 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="init"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.942035 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.942052 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d424526-1e00-412b-aeec-97b628067dcc" containerName="dnsmasq-dns"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.942069 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" containerName="watcher-api-log"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.943368 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.951913 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.951940 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.951987 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc"
Apr 04 02:26:47 crc kubenswrapper[4681]: I0404 02:26:47.958905 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:48 crc kubenswrapper[4681]: E0404 02:26:48.026932 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Apr 04 02:26:48 crc kubenswrapper[4681]: E0404 02:26:48.031489 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Apr 04 02:26:48 crc kubenswrapper[4681]: E0404 02:26:48.034380 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"]
Apr 04 02:26:48 crc kubenswrapper[4681]: E0404 02:26:48.034446 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerName="watcher-applier"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065311 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-config-data\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065394 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065455 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t8jh\" (UniqueName: \"kubernetes.io/projected/7f43afd0-4f66-4841-a564-7f47a84be4b1-kube-api-access-6t8jh\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065491 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065541 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065606 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.065677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f43afd0-4f66-4841-a564-7f47a84be4b1-logs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.167997 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-config-data\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168095 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168145 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t8jh\" (UniqueName: \"kubernetes.io/projected/7f43afd0-4f66-4841-a564-7f47a84be4b1-kube-api-access-6t8jh\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168177 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168216 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168291 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168350 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f43afd0-4f66-4841-a564-7f47a84be4b1-logs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.168981 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f43afd0-4f66-4841-a564-7f47a84be4b1-logs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.172779 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.176199 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-config-data\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.176750 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.177115 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-public-tls-certs\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.177181 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/7f43afd0-4f66-4841-a564-7f47a84be4b1-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.186081 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t8jh\" (UniqueName: \"kubernetes.io/projected/7f43afd0-4f66-4841-a564-7f47a84be4b1-kube-api-access-6t8jh\") pod \"watcher-api-0\" (UID: \"7f43afd0-4f66-4841-a564-7f47a84be4b1\") " pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.268618 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.698723 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: W0404 02:26:48.833190 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f43afd0_4f66_4841_a564_7f47a84be4b1.slice/crio-c83cddda568d4de12a1701e0147cac924ae82adabeb09001989e3d4c72f539a3 WatchSource:0}: Error finding container c83cddda568d4de12a1701e0147cac924ae82adabeb09001989e3d4c72f539a3: Status 404 returned error can't find the container with id c83cddda568d4de12a1701e0147cac924ae82adabeb09001989e3d4c72f539a3
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.839473 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.884596 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7xl6d\" (UniqueName: \"kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d\") pod \"b96bc277-5d81-4864-86b1-aeeab7142a0b\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") "
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.885001 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs\") pod \"b96bc277-5d81-4864-86b1-aeeab7142a0b\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") "
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.885060 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle\") pod \"b96bc277-5d81-4864-86b1-aeeab7142a0b\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") "
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.885344 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data\") pod \"b96bc277-5d81-4864-86b1-aeeab7142a0b\" (UID: \"b96bc277-5d81-4864-86b1-aeeab7142a0b\") "
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.885392 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs" (OuterVolumeSpecName: "logs") pod "b96bc277-5d81-4864-86b1-aeeab7142a0b" (UID: "b96bc277-5d81-4864-86b1-aeeab7142a0b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.885934 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b96bc277-5d81-4864-86b1-aeeab7142a0b-logs\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.891293 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d" (OuterVolumeSpecName: "kube-api-access-7xl6d") pod "b96bc277-5d81-4864-86b1-aeeab7142a0b" (UID: "b96bc277-5d81-4864-86b1-aeeab7142a0b"). InnerVolumeSpecName "kube-api-access-7xl6d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.913613 4681 generic.go:334] "Generic (PLEG): container finished" podID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87" exitCode=0
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.913732 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.913776 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b96bc277-5d81-4864-86b1-aeeab7142a0b","Type":"ContainerDied","Data":"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"}
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.914045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"b96bc277-5d81-4864-86b1-aeeab7142a0b","Type":"ContainerDied","Data":"0af659ced1fc306ce817fc64cc24269fac2b32876a5cc1e4399717ec793a845a"}
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.914067 4681 scope.go:117] "RemoveContainer" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.918374 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f43afd0-4f66-4841-a564-7f47a84be4b1","Type":"ContainerStarted","Data":"c83cddda568d4de12a1701e0147cac924ae82adabeb09001989e3d4c72f539a3"}
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.926050 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b96bc277-5d81-4864-86b1-aeeab7142a0b" (UID: "b96bc277-5d81-4864-86b1-aeeab7142a0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.947344 4681 scope.go:117] "RemoveContainer" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"
Apr 04 02:26:48 crc kubenswrapper[4681]: E0404 02:26:48.947887 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87\": container with ID starting with 716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87 not found: ID does not exist" containerID="716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.947936 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87"} err="failed to get container status \"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87\": rpc error: code = NotFound desc = could not find container \"716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87\": container with ID starting with 716c1d6e5658216496cfad2d22c5716f62293c9a2ecd1e5f2ced72aaee406e87 not found: ID does not exist"
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.954506 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data" (OuterVolumeSpecName: "config-data") pod "b96bc277-5d81-4864-86b1-aeeab7142a0b" (UID: "b96bc277-5d81-4864-86b1-aeeab7142a0b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.987921 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.987955 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b96bc277-5d81-4864-86b1-aeeab7142a0b-config-data\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:48 crc kubenswrapper[4681]: I0404 02:26:48.987964 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7xl6d\" (UniqueName: \"kubernetes.io/projected/b96bc277-5d81-4864-86b1-aeeab7142a0b-kube-api-access-7xl6d\") on node \"crc\" DevicePath \"\""
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.219069 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edd5da9-fac6-4908-9036-dca43081ea71" path="/var/lib/kubelet/pods/4edd5da9-fac6-4908-9036-dca43081ea71/volumes"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.271140 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"]
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.288555 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"]
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.302007 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"]
Apr 04 02:26:49 crc kubenswrapper[4681]: E0404 02:26:49.313740 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerName="watcher-applier"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.313817 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerName="watcher-applier"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.314292 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" containerName="watcher-applier"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.315085 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.317804 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.323604 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"]
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.498808 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-config-data\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.499164 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.499202 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7jd7\" (UniqueName: \"kubernetes.io/projected/8abb1419-6466-40ac-b2ec-2d6306e02026-kube-api-access-w7jd7\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.499337 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8abb1419-6466-40ac-b2ec-2d6306e02026-logs\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.601642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-config-data\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.601771 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.601822 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7jd7\" (UniqueName: \"kubernetes.io/projected/8abb1419-6466-40ac-b2ec-2d6306e02026-kube-api-access-w7jd7\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.601958 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8abb1419-6466-40ac-b2ec-2d6306e02026-logs\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.602385 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8abb1419-6466-40ac-b2ec-2d6306e02026-logs\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.607134 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.607188 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8abb1419-6466-40ac-b2ec-2d6306e02026-config-data\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.625280 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7jd7\" (UniqueName: \"kubernetes.io/projected/8abb1419-6466-40ac-b2ec-2d6306e02026-kube-api-access-w7jd7\") pod \"watcher-applier-0\" (UID: \"8abb1419-6466-40ac-b2ec-2d6306e02026\") " pod="openstack/watcher-applier-0"
Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.796788 4681 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/watcher-applier-0" Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.945698 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f43afd0-4f66-4841-a564-7f47a84be4b1","Type":"ContainerStarted","Data":"9dc8ef2b1c9503291ca817100dafc6b6ddb9938399d7fa4cb676607824033933"} Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.945958 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"7f43afd0-4f66-4841-a564-7f47a84be4b1","Type":"ContainerStarted","Data":"49913df3890cf04ce953e54faf8d4264860f73a1a5903defa50daadbc86e5627"} Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.947849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:26:49 crc kubenswrapper[4681]: I0404 02:26:49.978080 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=2.978059251 podStartE2EDuration="2.978059251s" podCreationTimestamp="2026-04-04 02:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:49.970601176 +0000 UTC m=+1889.636376296" watchObservedRunningTime="2026-04-04 02:26:49.978059251 +0000 UTC m=+1889.643834371" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.168684 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:50 crc kubenswrapper[4681]: W0404 02:26:50.326683 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8abb1419_6466_40ac_b2ec_2d6306e02026.slice/crio-7c0ca3b28f2063d70d6e317d0e48a156aaa05bc44a1b7e4c31e3ed078e32b99d WatchSource:0}: Error finding container 7c0ca3b28f2063d70d6e317d0e48a156aaa05bc44a1b7e4c31e3ed078e32b99d: Status 404 returned error can't find the container with id 
7c0ca3b28f2063d70d6e317d0e48a156aaa05bc44a1b7e4c31e3ed078e32b99d Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.336404 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.587452 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.735075 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle\") pod \"442b54de-22a7-4121-aab3-5365d4e0872d\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.735428 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs\") pod \"442b54de-22a7-4121-aab3-5365d4e0872d\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.735495 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca\") pod \"442b54de-22a7-4121-aab3-5365d4e0872d\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.735514 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data\") pod \"442b54de-22a7-4121-aab3-5365d4e0872d\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.735541 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzlrx\" (UniqueName: 
\"kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx\") pod \"442b54de-22a7-4121-aab3-5365d4e0872d\" (UID: \"442b54de-22a7-4121-aab3-5365d4e0872d\") " Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.736096 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs" (OuterVolumeSpecName: "logs") pod "442b54de-22a7-4121-aab3-5365d4e0872d" (UID: "442b54de-22a7-4121-aab3-5365d4e0872d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.739005 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx" (OuterVolumeSpecName: "kube-api-access-tzlrx") pod "442b54de-22a7-4121-aab3-5365d4e0872d" (UID: "442b54de-22a7-4121-aab3-5365d4e0872d"). InnerVolumeSpecName "kube-api-access-tzlrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.773045 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "442b54de-22a7-4121-aab3-5365d4e0872d" (UID: "442b54de-22a7-4121-aab3-5365d4e0872d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.775736 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "442b54de-22a7-4121-aab3-5365d4e0872d" (UID: "442b54de-22a7-4121-aab3-5365d4e0872d"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.823394 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data" (OuterVolumeSpecName: "config-data") pod "442b54de-22a7-4121-aab3-5365d4e0872d" (UID: "442b54de-22a7-4121-aab3-5365d4e0872d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.838502 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/442b54de-22a7-4121-aab3-5365d4e0872d-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.838535 4681 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.838548 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.838557 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzlrx\" (UniqueName: \"kubernetes.io/projected/442b54de-22a7-4121-aab3-5365d4e0872d-kube-api-access-tzlrx\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.838566 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/442b54de-22a7-4121-aab3-5365d4e0872d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.957370 4681 generic.go:334] "Generic (PLEG): container finished" podID="442b54de-22a7-4121-aab3-5365d4e0872d" 
containerID="ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48" exitCode=0 Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.957467 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerDied","Data":"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48"} Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.957502 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"442b54de-22a7-4121-aab3-5365d4e0872d","Type":"ContainerDied","Data":"ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8"} Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.957506 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.957523 4681 scope.go:117] "RemoveContainer" containerID="ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48" Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.964930 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"8abb1419-6466-40ac-b2ec-2d6306e02026","Type":"ContainerStarted","Data":"1e55947e871e16179a99733ffeea513bfda6b17f93659d05fd10fe06fdca3a66"} Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.964980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"8abb1419-6466-40ac-b2ec-2d6306e02026","Type":"ContainerStarted","Data":"7c0ca3b28f2063d70d6e317d0e48a156aaa05bc44a1b7e4c31e3ed078e32b99d"} Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.969923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerStarted","Data":"fe3a84d9fc51e6a3d0996ba3fe30bd51b031f194df4f38d72043e4469bbf91ba"} Apr 04 02:26:50 
crc kubenswrapper[4681]: I0404 02:26:50.970157 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-central-agent" containerID="cri-o://5853721a4ea3ee9282934db6901495b8247b930d117e6e7feb74f3bbcd07a87f" gracePeriod=30 Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.970304 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="proxy-httpd" containerID="cri-o://fe3a84d9fc51e6a3d0996ba3fe30bd51b031f194df4f38d72043e4469bbf91ba" gracePeriod=30 Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.970329 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-notification-agent" containerID="cri-o://501a283427a89ee1724c49da0ccc3c0d3ef755362f51d1fdcfa0a1950b614d83" gracePeriod=30 Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.970251 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="sg-core" containerID="cri-o://2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc" gracePeriod=30 Apr 04 02:26:50 crc kubenswrapper[4681]: I0404 02:26:50.997585 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.037047 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.843923256 podStartE2EDuration="14.037024791s" podCreationTimestamp="2026-04-04 02:26:37 +0000 UTC" firstStartedPulling="2026-04-04 02:26:38.553099412 +0000 UTC m=+1878.218874532" lastFinishedPulling="2026-04-04 02:26:49.746200947 +0000 UTC 
m=+1889.411976067" observedRunningTime="2026-04-04 02:26:51.023690764 +0000 UTC m=+1890.689465884" watchObservedRunningTime="2026-04-04 02:26:51.037024791 +0000 UTC m=+1890.702799911" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.062436 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.072398 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.084640 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.085079 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085103 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.085123 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085132 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.085147 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085154 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085438 4681 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085451 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085459 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.085470 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.086210 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.090442 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.091469 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod442b54de_22a7_4121_aab3_5365d4e0872d.slice/crio-ff4a8cadd09b16a4640f57a7e0f139b567cc4e1ba22a5bed7b60d64e0d95d0c8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51ce35e6_2f7c_4013_a980_ec9378099292.slice/crio-2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.116437 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.145309 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.145414 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.145528 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48584\" (UniqueName: \"kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.145578 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.145624 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 
02:26:51.156700 4681 scope.go:117] "RemoveContainer" containerID="ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48" Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.162469 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48\": container with ID starting with ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48 not found: ID does not exist" containerID="ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.162532 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48"} err="failed to get container status \"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48\": rpc error: code = NotFound desc = could not find container \"ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48\": container with ID starting with ca827116b7223772db07f624e9097d1dfb0c263672af3a0abc4d8ad0f3cb0e48 not found: ID does not exist" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.162567 4681 scope.go:117] "RemoveContainer" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:51 crc kubenswrapper[4681]: E0404 02:26:51.165715 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30\": container with ID starting with 13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30 not found: ID does not exist" containerID="13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.165753 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30"} err="failed to get container status \"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30\": rpc error: code = NotFound desc = could not find container \"13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30\": container with ID starting with 13815f30743a40f27c1344893fb0ce4d8bd89907e3e353cbbdd634d8cd72ea30 not found: ID does not exist" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.227523 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" path="/var/lib/kubelet/pods/442b54de-22a7-4121-aab3-5365d4e0872d/volumes" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.228146 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96bc277-5d81-4864-86b1-aeeab7142a0b" path="/var/lib/kubelet/pods/b96bc277-5d81-4864-86b1-aeeab7142a0b/volumes" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.246946 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.247004 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.247087 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48584\" (UniqueName: \"kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584\") pod 
\"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.247127 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.247152 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.247426 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.253892 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.255701 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " 
pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.267756 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.270721 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48584\" (UniqueName: \"kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584\") pod \"watcher-decision-engine-0\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:26:51 crc kubenswrapper[4681]: I0404 02:26:51.504594 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005377 4681 generic.go:334] "Generic (PLEG): container finished" podID="51ce35e6-2f7c-4013-a980-ec9378099292" containerID="fe3a84d9fc51e6a3d0996ba3fe30bd51b031f194df4f38d72043e4469bbf91ba" exitCode=0 Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005705 4681 generic.go:334] "Generic (PLEG): container finished" podID="51ce35e6-2f7c-4013-a980-ec9378099292" containerID="2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc" exitCode=2 Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005720 4681 generic.go:334] "Generic (PLEG): container finished" podID="51ce35e6-2f7c-4013-a980-ec9378099292" containerID="5853721a4ea3ee9282934db6901495b8247b930d117e6e7feb74f3bbcd07a87f" exitCode=0 Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005775 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerDied","Data":"fe3a84d9fc51e6a3d0996ba3fe30bd51b031f194df4f38d72043e4469bbf91ba"} Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerDied","Data":"2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc"} Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.005821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerDied","Data":"5853721a4ea3ee9282934db6901495b8247b930d117e6e7feb74f3bbcd07a87f"} Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.010950 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.058106 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.058075588 podStartE2EDuration="3.058075588s" podCreationTimestamp="2026-04-04 02:26:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:52.054290785 +0000 UTC m=+1891.720065915" watchObservedRunningTime="2026-04-04 02:26:52.058075588 +0000 UTC m=+1891.723850708" Apr 04 02:26:52 crc kubenswrapper[4681]: I0404 02:26:52.118629 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.021173 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3190cdec-e3a8-4aa4-81bb-bd814b96537f","Type":"ContainerStarted","Data":"db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e"} Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.021471 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3190cdec-e3a8-4aa4-81bb-bd814b96537f","Type":"ContainerStarted","Data":"2fa72aacc1ca8d20e306c12a53b94e8022d116fe5b8a825361af8cb597a2498b"} Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.041661 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.041641689 podStartE2EDuration="2.041641689s" podCreationTimestamp="2026-04-04 02:26:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:53.035685364 +0000 UTC m=+1892.701460494" watchObservedRunningTime="2026-04-04 02:26:53.041641689 +0000 UTC m=+1892.707416809" Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.167333 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.201314 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:26:53 crc kubenswrapper[4681]: E0404 02:26:53.201650 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:26:53 crc kubenswrapper[4681]: I0404 02:26:53.269142 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Apr 04 02:26:54 crc kubenswrapper[4681]: I0404 02:26:54.797448 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Apr 04 02:26:55 crc 
kubenswrapper[4681]: I0404 02:26:55.030925 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.031146 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-log" containerID="cri-o://08852104fb308cc149f2bf55c77ee20cee14d2e0fff74ac52a5b28ed27db9ff2" gracePeriod=30 Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.031430 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-httpd" containerID="cri-o://51cbafa8d2eb1dd9e74f8eb062a9c11c17e082f2fe4bd1ec5b261e6f904b51ef" gracePeriod=30 Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.069724 4681 generic.go:334] "Generic (PLEG): container finished" podID="51ce35e6-2f7c-4013-a980-ec9378099292" containerID="501a283427a89ee1724c49da0ccc3c0d3ef755362f51d1fdcfa0a1950b614d83" exitCode=0 Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.069767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerDied","Data":"501a283427a89ee1724c49da0ccc3c0d3ef755362f51d1fdcfa0a1950b614d83"} Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.248304 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438037 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438158 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghl4s\" (UniqueName: \"kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438227 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438315 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438458 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438554 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.438641 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd\") pod \"51ce35e6-2f7c-4013-a980-ec9378099292\" (UID: \"51ce35e6-2f7c-4013-a980-ec9378099292\") " Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.440257 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.440536 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.450001 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts" (OuterVolumeSpecName: "scripts") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.453378 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s" (OuterVolumeSpecName: "kube-api-access-ghl4s") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "kube-api-access-ghl4s". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.485671 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.541528 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.541562 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghl4s\" (UniqueName: \"kubernetes.io/projected/51ce35e6-2f7c-4013-a980-ec9378099292-kube-api-access-ghl4s\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.541574 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.541582 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-scripts\") on node 
\"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.541589 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/51ce35e6-2f7c-4013-a980-ec9378099292-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.559966 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.588110 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data" (OuterVolumeSpecName: "config-data") pod "51ce35e6-2f7c-4013-a980-ec9378099292" (UID: "51ce35e6-2f7c-4013-a980-ec9378099292"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.642886 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:55 crc kubenswrapper[4681]: I0404 02:26:55.643089 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51ce35e6-2f7c-4013-a980-ec9378099292-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.083073 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"51ce35e6-2f7c-4013-a980-ec9378099292","Type":"ContainerDied","Data":"427b3dd696687ae35765da4e1b142e80bb1c9f4af9b32c19afd3d8f59b0232aa"} Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.083133 4681 scope.go:117] "RemoveContainer" containerID="fe3a84d9fc51e6a3d0996ba3fe30bd51b031f194df4f38d72043e4469bbf91ba" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.084474 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.085310 4681 generic.go:334] "Generic (PLEG): container finished" podID="60193934-a521-4dda-8d57-f41affeaab02" containerID="08852104fb308cc149f2bf55c77ee20cee14d2e0fff74ac52a5b28ed27db9ff2" exitCode=143 Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.085349 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerDied","Data":"08852104fb308cc149f2bf55c77ee20cee14d2e0fff74ac52a5b28ed27db9ff2"} Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.139692 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.147052 4681 scope.go:117] "RemoveContainer" containerID="2f72faad5dff891f6f91aa187215912aba5d0206d4112c5dbb28b433ede8cacc" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.157285 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.175571 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:56 crc kubenswrapper[4681]: E0404 02:26:56.176076 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-central-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176098 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-central-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: E0404 02:26:56.176117 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="sg-core" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176124 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="sg-core" Apr 04 02:26:56 crc kubenswrapper[4681]: E0404 02:26:56.176139 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-notification-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176145 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-notification-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: E0404 02:26:56.176157 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="proxy-httpd" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176162 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="proxy-httpd" Apr 04 02:26:56 crc kubenswrapper[4681]: E0404 02:26:56.176174 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176180 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="442b54de-22a7-4121-aab3-5365d4e0872d" containerName="watcher-decision-engine" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176382 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="proxy-httpd" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176396 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="sg-core" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176408 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-central-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.176427 4681 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" containerName="ceilometer-notification-agent" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.178356 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.180732 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.184349 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.190147 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.232291 4681 scope.go:117] "RemoveContainer" containerID="501a283427a89ee1724c49da0ccc3c0d3ef755362f51d1fdcfa0a1950b614d83" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.251208 4681 scope.go:117] "RemoveContainer" containerID="5853721a4ea3ee9282934db6901495b8247b930d117e6e7feb74f3bbcd07a87f" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256147 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256194 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256426 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256544 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256651 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.256745 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzrjm\" (UniqueName: \"kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.358596 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.358667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.359654 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.358809 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.360475 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.360739 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.360778 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.360811 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzrjm\" (UniqueName: \"kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.360839 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.364330 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.364345 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.365455 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " 
pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.366161 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.382327 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzrjm\" (UniqueName: \"kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm\") pod \"ceilometer-0\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") " pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.531050 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.672246 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.672700 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-httpd" containerID="cri-o://7fc6d50786cbfbfb9e2f347c07b3050a1e48e7c61b300d9e7bb625443c95232a" gracePeriod=30 Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.672620 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-log" containerID="cri-o://e68dda0525c1dd814b88ac8ab7014ba8cfbdafb2115894da3d97ef447742b5f8" gracePeriod=30 Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.736698 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dn8zm"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.737969 
4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.754408 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dn8zm"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.868527 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-zz49b"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.871874 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.871921 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlnrq\" (UniqueName: \"kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.872163 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.873832 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-f7f9d8bd4-j5zm6" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.873857 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-f7f9d8bd4-j5zm6" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.913349 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zz49b"] Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.920770 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-f7f9d8bd4-j5zm6" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.975898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7v6c\" (UniqueName: \"kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c\") pod \"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.975938 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts\") pod \"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:56 crc 
kubenswrapper[4681]: I0404 02:26:56.975968 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.975996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlnrq\" (UniqueName: \"kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:56 crc kubenswrapper[4681]: I0404 02:26:56.977045 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.005320 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-2f8c-account-create-update-96qv2"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.006660 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.010449 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.012526 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2f8c-account-create-update-96qv2"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.015060 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlnrq\" (UniqueName: \"kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq\") pod \"nova-api-db-create-dn8zm\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.029419 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": dial tcp 10.217.0.179:9292: connect: connection refused" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.029770 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.179:9292/healthcheck\": dial tcp 10.217.0.179:9292: connect: connection refused" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.046327 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-qlsx4"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.047919 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.079769 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7v6c\" (UniqueName: \"kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c\") pod \"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.079807 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts\") pod \"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.079870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.079955 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnjds\" (UniqueName: \"kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.084423 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts\") pod 
\"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.109648 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qlsx4"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.150246 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7v6c\" (UniqueName: \"kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c\") pod \"nova-cell0-db-create-zz49b\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.157440 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-4867-account-create-update-gj56s"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.159177 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.161084 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.162410 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.163553 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4867-account-create-update-gj56s"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.183164 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.183219 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zbbv\" (UniqueName: \"kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv\") pod \"nova-cell1-db-create-qlsx4\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.183244 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnjds\" (UniqueName: \"kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.183339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts\") pod \"nova-cell1-db-create-qlsx4\" (UID: 
\"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.184107 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.190625 4681 generic.go:334] "Generic (PLEG): container finished" podID="60193934-a521-4dda-8d57-f41affeaab02" containerID="51cbafa8d2eb1dd9e74f8eb062a9c11c17e082f2fe4bd1ec5b261e6f904b51ef" exitCode=0 Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.190772 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerDied","Data":"51cbafa8d2eb1dd9e74f8eb062a9c11c17e082f2fe4bd1ec5b261e6f904b51ef"} Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.217182 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnjds\" (UniqueName: \"kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds\") pod \"nova-api-2f8c-account-create-update-96qv2\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") " pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.225081 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.243183 4681 generic.go:334] "Generic (PLEG): container finished" podID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerID="e68dda0525c1dd814b88ac8ab7014ba8cfbdafb2115894da3d97ef447742b5f8" exitCode=143 Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.247246 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ce35e6-2f7c-4013-a980-ec9378099292" path="/var/lib/kubelet/pods/51ce35e6-2f7c-4013-a980-ec9378099292/volumes" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.255411 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerDied","Data":"e68dda0525c1dd814b88ac8ab7014ba8cfbdafb2115894da3d97ef447742b5f8"} Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.296558 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7k75\" (UniqueName: \"kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.296644 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zbbv\" (UniqueName: \"kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv\") pod \"nova-cell1-db-create-qlsx4\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.296931 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts\") pod \"nova-cell1-db-create-qlsx4\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.297181 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.307700 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts\") pod \"nova-cell1-db-create-qlsx4\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.326102 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-dc13-account-create-update-bm9fk"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.327430 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.334379 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.352847 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zbbv\" (UniqueName: \"kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv\") pod \"nova-cell1-db-create-qlsx4\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.361347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-dc13-account-create-update-bm9fk"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.388167 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2f8c-account-create-update-96qv2" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.406332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7k75\" (UniqueName: \"kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.410622 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.411166 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.411294 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2gv\" (UniqueName: \"kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.412554 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.425494 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.504452 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7k75\" (UniqueName: \"kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75\") pod \"nova-cell0-4867-account-create-update-gj56s\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") " pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.505348 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.507378 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.529148 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.529894 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.530576 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2gv\" (UniqueName: \"kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.551136 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2gv\" (UniqueName: \"kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv\") pod \"nova-cell1-dc13-account-create-update-bm9fk\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.715518 4681 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.731492 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.838584 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.838734 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.838973 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.839098 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.839159 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb6wm\" (UniqueName: \"kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm\") pod 
\"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.839241 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.839597 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.839890 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data\") pod \"60193934-a521-4dda-8d57-f41affeaab02\" (UID: \"60193934-a521-4dda-8d57-f41affeaab02\") " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.846210 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs" (OuterVolumeSpecName: "logs") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.849155 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts" (OuterVolumeSpecName: "scripts") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.849624 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.856846 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.860298 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm" (OuterVolumeSpecName: "kube-api-access-mb6wm") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "kube-api-access-mb6wm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.945862 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.981119 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.981282 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-httpd-run\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.981507 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb6wm\" (UniqueName: \"kubernetes.io/projected/60193934-a521-4dda-8d57-f41affeaab02-kube-api-access-mb6wm\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.981597 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60193934-a521-4dda-8d57-f41affeaab02-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.954757 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.978322 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Apr 04 02:26:57 crc kubenswrapper[4681]: I0404 02:26:57.985153 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data" (OuterVolumeSpecName: "config-data") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.003444 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "60193934-a521-4dda-8d57-f41affeaab02" (UID: "60193934-a521-4dda-8d57-f41affeaab02"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.083874 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.083924 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.083937 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-public-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.083945 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60193934-a521-4dda-8d57-f41affeaab02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.255859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerStarted","Data":"d3c8eded2a9aaea1936056634d7a7d6a959a84dcac7cdc037f6e77a4e601b7ed"} Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.257816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"60193934-a521-4dda-8d57-f41affeaab02","Type":"ContainerDied","Data":"5c1ce7b7e6429aab3163ce4fd4403dc6993144e555d0349de648c0896441fb48"} Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.257862 4681 scope.go:117] "RemoveContainer" containerID="51cbafa8d2eb1dd9e74f8eb062a9c11c17e082f2fe4bd1ec5b261e6f904b51ef" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.258046 4681 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.258597 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-zz49b"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.270115 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.288155 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.352749 4681 scope.go:117] "RemoveContainer" containerID="08852104fb308cc149f2bf55c77ee20cee14d2e0fff74ac52a5b28ed27db9ff2" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.363031 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.178:9292/healthcheck\": read tcp 10.217.0.2:32778->10.217.0.178:9292: read: connection reset by peer" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.363382 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.178:9292/healthcheck\": read tcp 10.217.0.2:32768->10.217.0.178:9292: read: connection reset by peer" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.367577 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.384621 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.399872 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Apr 04 02:26:58 crc kubenswrapper[4681]: E0404 02:26:58.400375 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-log" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.400391 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-log" Apr 04 02:26:58 crc kubenswrapper[4681]: E0404 02:26:58.400420 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-httpd" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.400427 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-httpd" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.400654 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-log" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.400670 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="60193934-a521-4dda-8d57-f41affeaab02" containerName="glance-httpd" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.401958 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.407072 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.407276 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.420902 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.458556 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dn8zm"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491295 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491340 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfjqb\" (UniqueName: \"kubernetes.io/projected/9af43da5-4945-49e2-ad66-afe1eefd4f4c-kube-api-access-pfjqb\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491384 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc 
kubenswrapper[4681]: I0404 02:26:58.491470 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491547 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491572 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-logs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.491704 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc 
kubenswrapper[4681]: I0404 02:26:58.593472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.594806 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfjqb\" (UniqueName: \"kubernetes.io/projected/9af43da5-4945-49e2-ad66-afe1eefd4f4c-kube-api-access-pfjqb\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.594877 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.595248 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.595842 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.595953 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.595978 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-logs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.596045 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.596115 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.596653 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.596700 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9af43da5-4945-49e2-ad66-afe1eefd4f4c-logs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.604231 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.604593 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-config-data\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.608801 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-scripts\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.616926 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9af43da5-4945-49e2-ad66-afe1eefd4f4c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.619754 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfjqb\" (UniqueName: \"kubernetes.io/projected/9af43da5-4945-49e2-ad66-afe1eefd4f4c-kube-api-access-pfjqb\") pod 
\"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.652113 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"9af43da5-4945-49e2-ad66-afe1eefd4f4c\") " pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.730509 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.744729 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.746650 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-2f8c-account-create-update-96qv2"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.800343 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-qlsx4"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.866761 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-4867-account-create-update-gj56s"] Apr 04 02:26:58 crc kubenswrapper[4681]: I0404 02:26:58.881560 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-dc13-account-create-update-bm9fk"] Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.216398 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60193934-a521-4dda-8d57-f41affeaab02" path="/var/lib/kubelet/pods/60193934-a521-4dda-8d57-f41affeaab02/volumes" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.264709 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-554fd9954f-c5kv8" podUID="99648c0a-d8f3-41f8-a03d-7a21a4a84156" 
containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.272334 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-554fd9954f-c5kv8" podUID="99648c0a-d8f3-41f8-a03d-7a21a4a84156" containerName="neutron-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.274630 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-554fd9954f-c5kv8" podUID="99648c0a-d8f3-41f8-a03d-7a21a4a84156" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 503" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.288526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zz49b" event={"ID":"5c9e030b-26b4-4add-95b6-aaf9b50907db","Type":"ContainerStarted","Data":"e1e3f8c4c2eaa5fa6f2817639852c3f7064d58c649cb967fc4717e06ce9b7fd8"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.288573 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zz49b" event={"ID":"5c9e030b-26b4-4add-95b6-aaf9b50907db","Type":"ContainerStarted","Data":"1509cd3e414da40ba57de680c9d6347b9aa243f362ad66e4152e6c02360a3c73"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.298179 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4867-account-create-update-gj56s" event={"ID":"61bda19d-5103-4038-8296-583fb1d25024","Type":"ContainerStarted","Data":"5839011d1433399b8f15bae69a584cb6956cd40732713f9dd631f3c602baf909"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.306641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2f8c-account-create-update-96qv2" event={"ID":"43326c52-09d6-47c0-a336-cd16e11dd6a0","Type":"ContainerStarted","Data":"cc8084f2436cc2d52944fd2ec5ef3436667fbdf7dcfac590d8deab0ce7ddd9b5"} Apr 
04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.316543 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-zz49b" podStartSLOduration=3.316521957 podStartE2EDuration="3.316521957s" podCreationTimestamp="2026-04-04 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:26:59.314632995 +0000 UTC m=+1898.980408115" watchObservedRunningTime="2026-04-04 02:26:59.316521957 +0000 UTC m=+1898.982297077" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.337010 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qlsx4" event={"ID":"23481861-506b-4a5d-a1da-d6a21811d7c5","Type":"ContainerStarted","Data":"11b609ffa52a0158aaec66d437e92916326c4aa39f05874c631655e1188c73f0"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.379728 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dn8zm" event={"ID":"624921f8-2de2-4354-9a6b-c5cb0c9e9a21","Type":"ContainerStarted","Data":"9eba567f03f4a472d2996c523cb0a32a66fc018e1aa71c8e22056ea11e3349b5"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.383449 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" event={"ID":"0f70a3b1-7886-4a8d-988e-8bf233a96729","Type":"ContainerStarted","Data":"83bab8969b59c443b0ce68df4fd19db82e5a8a593880877e34ec43f93518bc36"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.393984 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerDied","Data":"7fc6d50786cbfbfb9e2f347c07b3050a1e48e7c61b300d9e7bb625443c95232a"} Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.393096 4681 generic.go:334] "Generic (PLEG): container finished" podID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" 
containerID="7fc6d50786cbfbfb9e2f347c07b3050a1e48e7c61b300d9e7bb625443c95232a" exitCode=0 Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.418795 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.797027 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Apr 04 02:26:59 crc kubenswrapper[4681]: I0404 02:26:59.897230 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.008604 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Apr 04 02:27:00 crc kubenswrapper[4681]: W0404 02:27:00.093708 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9af43da5_4945_49e2_ad66_afe1eefd4f4c.slice/crio-a47bef8688372f5dc707f4e639b6b2753202efcce2e165958c9c3f4d7242fdcf WatchSource:0}: Error finding container a47bef8688372f5dc707f4e639b6b2753202efcce2e165958c9c3f4d7242fdcf: Status 404 returned error can't find the container with id a47bef8688372f5dc707f4e639b6b2753202efcce2e165958c9c3f4d7242fdcf Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.097703 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.289943 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290242 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290352 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqjjm\" (UniqueName: \"kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290424 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290461 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290496 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290533 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.290561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts\") pod \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\" (UID: \"4b42bc47-8477-440f-b606-5c8c5cc6dee3\") " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.293790 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.294902 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs" (OuterVolumeSpecName: "logs") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.299181 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm" (OuterVolumeSpecName: "kube-api-access-xqjjm") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "kube-api-access-xqjjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.302773 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.305555 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts" (OuterVolumeSpecName: "scripts") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.354775 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392526 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392562 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-httpd-run\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392580 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392591 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392601 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b42bc47-8477-440f-b606-5c8c5cc6dee3-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.392609 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqjjm\" (UniqueName: \"kubernetes.io/projected/4b42bc47-8477-440f-b606-5c8c5cc6dee3-kube-api-access-xqjjm\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.415209 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data" (OuterVolumeSpecName: "config-data") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.416566 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b42bc47-8477-440f-b606-5c8c5cc6dee3","Type":"ContainerDied","Data":"e0215b4bd80fea3875abc995e2bda8f208083b22a3807b390a26a20158866062"} Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.416758 4681 scope.go:117] "RemoveContainer" containerID="7fc6d50786cbfbfb9e2f347c07b3050a1e48e7c61b300d9e7bb625443c95232a" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.416965 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.419314 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9af43da5-4945-49e2-ad66-afe1eefd4f4c","Type":"ContainerStarted","Data":"a47bef8688372f5dc707f4e639b6b2753202efcce2e165958c9c3f4d7242fdcf"} Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.422661 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.426391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b42bc47-8477-440f-b606-5c8c5cc6dee3" (UID: "4b42bc47-8477-440f-b606-5c8c5cc6dee3"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.447935 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dn8zm" event={"ID":"624921f8-2de2-4354-9a6b-c5cb0c9e9a21","Type":"ContainerStarted","Data":"b674a83f619baadebecdb85e2d2cf9e2b2914920f0d44fb1a5207e4f0c022473"} Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.455730 4681 scope.go:117] "RemoveContainer" containerID="e68dda0525c1dd814b88ac8ab7014ba8cfbdafb2115894da3d97ef447742b5f8" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.475041 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" event={"ID":"0f70a3b1-7886-4a8d-988e-8bf233a96729","Type":"ContainerStarted","Data":"311673f6c79575cc287edbc697b98f4db2d232b9478454010740f1f10de0a239"} Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.480790 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-dn8zm" podStartSLOduration=4.480765036 podStartE2EDuration="4.480765036s" podCreationTimestamp="2026-04-04 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:00.469798025 +0000 UTC m=+1900.135573155" watchObservedRunningTime="2026-04-04 02:27:00.480765036 +0000 UTC m=+1900.146540156" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.494916 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.494968 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b42bc47-8477-440f-b606-5c8c5cc6dee3-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc 
kubenswrapper[4681]: I0404 02:27:00.494979 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.523464 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.768644 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.789613 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.810679 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:27:00 crc kubenswrapper[4681]: E0404 02:27:00.811439 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-log" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.811471 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-log" Apr 04 02:27:00 crc kubenswrapper[4681]: E0404 02:27:00.811548 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-httpd" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.811561 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-httpd" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.811848 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-log" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.811871 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" containerName="glance-httpd" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.813453 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.816360 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.816608 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Apr 04 02:27:00 crc kubenswrapper[4681]: I0404 02:27:00.893382 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012616 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012727 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012809 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 
02:27:01.012848 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012898 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-286dc\" (UniqueName: \"kubernetes.io/projected/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-kube-api-access-286dc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.012920 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.013048 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " 
pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114660 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114765 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114796 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-286dc\" (UniqueName: \"kubernetes.io/projected/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-kube-api-access-286dc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114824 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 
crc kubenswrapper[4681]: I0404 02:27:01.114857 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.114995 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.115997 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-logs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.116352 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.117135 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.120329 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.121127 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.121541 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.130467 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.143183 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-286dc\" (UniqueName: 
\"kubernetes.io/projected/1f0d9a1d-5773-426e-adfa-6a0aae0ec79a-kube-api-access-286dc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.206625 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a\") " pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.240923 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b42bc47-8477-440f-b606-5c8c5cc6dee3" path="/var/lib/kubelet/pods/4b42bc47-8477-440f-b606-5c8c5cc6dee3/volumes" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.463950 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.507201 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.520452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4867-account-create-update-gj56s" event={"ID":"61bda19d-5103-4038-8296-583fb1d25024","Type":"ContainerStarted","Data":"69024f7ab31c18cfb5865f3d5ec39c5c664924e6314921baf872776089090cf9"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.525544 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2f8c-account-create-update-96qv2" event={"ID":"43326c52-09d6-47c0-a336-cd16e11dd6a0","Type":"ContainerStarted","Data":"f9dbf785a3d3b41f29e3dc2fc882897b370437dc2d62b37f2d236a7f8dbad8b4"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.533027 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-db-create-qlsx4" event={"ID":"23481861-506b-4a5d-a1da-d6a21811d7c5","Type":"ContainerStarted","Data":"79c496998594a50611ff5d009d57137b1e9fded3c052d1bee84662db1ca68783"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.546151 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerStarted","Data":"54abf6825932ee075a152abf32429784fb8e77fcf888a409f8b6c60a4bdf4bfd"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.552068 4681 generic.go:334] "Generic (PLEG): container finished" podID="624921f8-2de2-4354-9a6b-c5cb0c9e9a21" containerID="b674a83f619baadebecdb85e2d2cf9e2b2914920f0d44fb1a5207e4f0c022473" exitCode=0 Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.552122 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dn8zm" event={"ID":"624921f8-2de2-4354-9a6b-c5cb0c9e9a21","Type":"ContainerDied","Data":"b674a83f619baadebecdb85e2d2cf9e2b2914920f0d44fb1a5207e4f0c022473"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.573278 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f70a3b1-7886-4a8d-988e-8bf233a96729" containerID="311673f6c79575cc287edbc697b98f4db2d232b9478454010740f1f10de0a239" exitCode=0 Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.573423 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" event={"ID":"0f70a3b1-7886-4a8d-988e-8bf233a96729","Type":"ContainerDied","Data":"311673f6c79575cc287edbc697b98f4db2d232b9478454010740f1f10de0a239"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.574213 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-4867-account-create-update-gj56s" podStartSLOduration=4.57417538 podStartE2EDuration="4.57417538s" podCreationTimestamp="2026-04-04 02:26:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:01.547123128 +0000 UTC m=+1901.212898248" watchObservedRunningTime="2026-04-04 02:27:01.57417538 +0000 UTC m=+1901.239950500" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.582752 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-2f8c-account-create-update-96qv2" podStartSLOduration=5.582715634 podStartE2EDuration="5.582715634s" podCreationTimestamp="2026-04-04 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:01.565933194 +0000 UTC m=+1901.231708314" watchObservedRunningTime="2026-04-04 02:27:01.582715634 +0000 UTC m=+1901.248490754" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.613141 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-qlsx4" podStartSLOduration=5.613088258 podStartE2EDuration="5.613088258s" podCreationTimestamp="2026-04-04 02:26:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:01.594378365 +0000 UTC m=+1901.260153485" watchObservedRunningTime="2026-04-04 02:27:01.613088258 +0000 UTC m=+1901.278863368" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.622252 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.633372 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9af43da5-4945-49e2-ad66-afe1eefd4f4c","Type":"ContainerStarted","Data":"27b4ccddb298ad6f0f999faca6342e9af52bcc23bc20c1af447a248b4d9c926e"} Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.644834 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="5c9e030b-26b4-4add-95b6-aaf9b50907db" containerID="e1e3f8c4c2eaa5fa6f2817639852c3f7064d58c649cb967fc4717e06ce9b7fd8" exitCode=0 Apr 04 02:27:01 crc kubenswrapper[4681]: I0404 02:27:01.644968 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zz49b" event={"ID":"5c9e030b-26b4-4add-95b6-aaf9b50907db","Type":"ContainerDied","Data":"e1e3f8c4c2eaa5fa6f2817639852c3f7064d58c649cb967fc4717e06ce9b7fd8"} Apr 04 02:27:01 crc kubenswrapper[4681]: E0404 02:27:01.865415 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f70a3b1_7886_4a8d_988e_8bf233a96729.slice/crio-311673f6c79575cc287edbc697b98f4db2d232b9478454010740f1f10de0a239.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod624921f8_2de2_4354_9a6b_c5cb0c9e9a21.slice/crio-conmon-b674a83f619baadebecdb85e2d2cf9e2b2914920f0d44fb1a5207e4f0c022473.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c9e030b_26b4_4add_95b6_aaf9b50907db.slice/crio-conmon-e1e3f8c4c2eaa5fa6f2817639852c3f7064d58c649cb967fc4717e06ce9b7fd8.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.381279 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.656427 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerStarted","Data":"4caeb7e915589760f1b67c931dfecb34555468064a7f241b94dbe0e7d6e00a3e"} Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.658675 4681 generic.go:334] "Generic (PLEG): container finished" podID="23481861-506b-4a5d-a1da-d6a21811d7c5" 
containerID="79c496998594a50611ff5d009d57137b1e9fded3c052d1bee84662db1ca68783" exitCode=0 Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.658773 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qlsx4" event={"ID":"23481861-506b-4a5d-a1da-d6a21811d7c5","Type":"ContainerDied","Data":"79c496998594a50611ff5d009d57137b1e9fded3c052d1bee84662db1ca68783"} Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.660992 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a","Type":"ContainerStarted","Data":"db24c6e15f5949d1ffe3b5c8127c8ff82a5c2914398b94516e529e6b40faaef0"} Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.662075 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:02 crc kubenswrapper[4681]: I0404 02:27:02.691492 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.178551 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.320833 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts\") pod \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.320904 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlnrq\" (UniqueName: \"kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq\") pod \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\" (UID: \"624921f8-2de2-4354-9a6b-c5cb0c9e9a21\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.322311 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "624921f8-2de2-4354-9a6b-c5cb0c9e9a21" (UID: "624921f8-2de2-4354-9a6b-c5cb0c9e9a21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.322844 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.331645 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq" (OuterVolumeSpecName: "kube-api-access-tlnrq") pod "624921f8-2de2-4354-9a6b-c5cb0c9e9a21" (UID: "624921f8-2de2-4354-9a6b-c5cb0c9e9a21"). InnerVolumeSpecName "kube-api-access-tlnrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.335779 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.379417 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.424775 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts\") pod \"0f70a3b1-7886-4a8d-988e-8bf233a96729\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.424910 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn2gv\" (UniqueName: \"kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv\") pod \"0f70a3b1-7886-4a8d-988e-8bf233a96729\" (UID: \"0f70a3b1-7886-4a8d-988e-8bf233a96729\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.425371 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlnrq\" (UniqueName: \"kubernetes.io/projected/624921f8-2de2-4354-9a6b-c5cb0c9e9a21-kube-api-access-tlnrq\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.425681 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0f70a3b1-7886-4a8d-988e-8bf233a96729" (UID: "0f70a3b1-7886-4a8d-988e-8bf233a96729"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.429596 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv" (OuterVolumeSpecName: "kube-api-access-hn2gv") pod "0f70a3b1-7886-4a8d-988e-8bf233a96729" (UID: "0f70a3b1-7886-4a8d-988e-8bf233a96729"). InnerVolumeSpecName "kube-api-access-hn2gv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.526860 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7v6c\" (UniqueName: \"kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c\") pod \"5c9e030b-26b4-4add-95b6-aaf9b50907db\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.527102 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts\") pod \"5c9e030b-26b4-4add-95b6-aaf9b50907db\" (UID: \"5c9e030b-26b4-4add-95b6-aaf9b50907db\") " Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.527614 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn2gv\" (UniqueName: \"kubernetes.io/projected/0f70a3b1-7886-4a8d-988e-8bf233a96729-kube-api-access-hn2gv\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.527630 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0f70a3b1-7886-4a8d-988e-8bf233a96729-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.528135 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c9e030b-26b4-4add-95b6-aaf9b50907db" (UID: "5c9e030b-26b4-4add-95b6-aaf9b50907db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.532999 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c" (OuterVolumeSpecName: "kube-api-access-n7v6c") pod "5c9e030b-26b4-4add-95b6-aaf9b50907db" (UID: "5c9e030b-26b4-4add-95b6-aaf9b50907db"). InnerVolumeSpecName "kube-api-access-n7v6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.629437 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7v6c\" (UniqueName: \"kubernetes.io/projected/5c9e030b-26b4-4add-95b6-aaf9b50907db-kube-api-access-n7v6c\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.629475 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9e030b-26b4-4add-95b6-aaf9b50907db-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.678407 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dn8zm" event={"ID":"624921f8-2de2-4354-9a6b-c5cb0c9e9a21","Type":"ContainerDied","Data":"9eba567f03f4a472d2996c523cb0a32a66fc018e1aa71c8e22056ea11e3349b5"} Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.678449 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eba567f03f4a472d2996c523cb0a32a66fc018e1aa71c8e22056ea11e3349b5" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.678492 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dn8zm" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.684795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" event={"ID":"0f70a3b1-7886-4a8d-988e-8bf233a96729","Type":"ContainerDied","Data":"83bab8969b59c443b0ce68df4fd19db82e5a8a593880877e34ec43f93518bc36"} Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.684831 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83bab8969b59c443b0ce68df4fd19db82e5a8a593880877e34ec43f93518bc36" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.684878 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-dc13-account-create-update-bm9fk" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.688040 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a","Type":"ContainerStarted","Data":"5c632e5e131dd9e0dd77507f82f5fba89b6586faa7d2e36459741df1f5e331e4"} Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.689850 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9af43da5-4945-49e2-ad66-afe1eefd4f4c","Type":"ContainerStarted","Data":"43e1a1520cdd2a07b26c8d3cd3b4499673f539f11beefc68ed87d9042232c287"} Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.694409 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-zz49b" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.694437 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-zz49b" event={"ID":"5c9e030b-26b4-4add-95b6-aaf9b50907db","Type":"ContainerDied","Data":"1509cd3e414da40ba57de680c9d6347b9aa243f362ad66e4152e6c02360a3c73"} Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.694476 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1509cd3e414da40ba57de680c9d6347b9aa243f362ad66e4152e6c02360a3c73" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.718753 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.71873468 podStartE2EDuration="5.71873468s" podCreationTimestamp="2026-04-04 02:26:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:03.715563343 +0000 UTC m=+1903.381338473" watchObservedRunningTime="2026-04-04 02:27:03.71873468 +0000 UTC m=+1903.384509800" Apr 04 02:27:03 crc kubenswrapper[4681]: I0404 02:27:03.982639 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.140705 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zbbv\" (UniqueName: \"kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv\") pod \"23481861-506b-4a5d-a1da-d6a21811d7c5\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.141084 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts\") pod \"23481861-506b-4a5d-a1da-d6a21811d7c5\" (UID: \"23481861-506b-4a5d-a1da-d6a21811d7c5\") " Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.142092 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23481861-506b-4a5d-a1da-d6a21811d7c5" (UID: "23481861-506b-4a5d-a1da-d6a21811d7c5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.160600 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv" (OuterVolumeSpecName: "kube-api-access-5zbbv") pod "23481861-506b-4a5d-a1da-d6a21811d7c5" (UID: "23481861-506b-4a5d-a1da-d6a21811d7c5"). InnerVolumeSpecName "kube-api-access-5zbbv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.243327 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zbbv\" (UniqueName: \"kubernetes.io/projected/23481861-506b-4a5d-a1da-d6a21811d7c5-kube-api-access-5zbbv\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.243360 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23481861-506b-4a5d-a1da-d6a21811d7c5-operator-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.715522 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-qlsx4" event={"ID":"23481861-506b-4a5d-a1da-d6a21811d7c5","Type":"ContainerDied","Data":"11b609ffa52a0158aaec66d437e92916326c4aa39f05874c631655e1188c73f0"} Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.715583 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b609ffa52a0158aaec66d437e92916326c4aa39f05874c631655e1188c73f0" Apr 04 02:27:04 crc kubenswrapper[4681]: I0404 02:27:04.715785 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-qlsx4" Apr 04 02:27:06 crc kubenswrapper[4681]: I0404 02:27:06.767996 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1f0d9a1d-5773-426e-adfa-6a0aae0ec79a","Type":"ContainerStarted","Data":"a2361b90b96974dd19d11f7e756f44a6f7e11d6f637c0da178f3c00dacc20920"} Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.201356 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:27:07 crc kubenswrapper[4681]: E0404 02:27:07.201603 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.739451 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.739913 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerName="watcher-decision-engine" containerID="cri-o://db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" gracePeriod=30 Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.780374 4681 generic.go:334] "Generic (PLEG): container finished" podID="61bda19d-5103-4038-8296-583fb1d25024" containerID="69024f7ab31c18cfb5865f3d5ec39c5c664924e6314921baf872776089090cf9" exitCode=0 Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.780458 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-4867-account-create-update-gj56s" event={"ID":"61bda19d-5103-4038-8296-583fb1d25024","Type":"ContainerDied","Data":"69024f7ab31c18cfb5865f3d5ec39c5c664924e6314921baf872776089090cf9"} Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.783786 4681 generic.go:334] "Generic (PLEG): container finished" podID="43326c52-09d6-47c0-a336-cd16e11dd6a0" containerID="f9dbf785a3d3b41f29e3dc2fc882897b370437dc2d62b37f2d236a7f8dbad8b4" exitCode=0 Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.783862 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2f8c-account-create-update-96qv2" event={"ID":"43326c52-09d6-47c0-a336-cd16e11dd6a0","Type":"ContainerDied","Data":"f9dbf785a3d3b41f29e3dc2fc882897b370437dc2d62b37f2d236a7f8dbad8b4"} Apr 04 02:27:07 crc kubenswrapper[4681]: I0404 02:27:07.851782 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.851763173 podStartE2EDuration="7.851763173s" podCreationTimestamp="2026-04-04 02:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:07.85128254 +0000 UTC m=+1907.517057670" watchObservedRunningTime="2026-04-04 02:27:07.851763173 +0000 UTC m=+1907.517538293" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.745765 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.746066 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.774669 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.786452 4681 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.798571 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerStarted","Data":"863541c08a4105958de6ad17cedfc4c9079a35435eefd2f92ae13bbc6d21b237"} Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.799156 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Apr 04 02:27:08 crc kubenswrapper[4681]: I0404 02:27:08.799185 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.258634 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4867-account-create-update-gj56s" Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.268380 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-2f8c-account-create-update-96qv2"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.373861 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnjds\" (UniqueName: \"kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds\") pod \"43326c52-09d6-47c0-a336-cd16e11dd6a0\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") "
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.374155 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts\") pod \"61bda19d-5103-4038-8296-583fb1d25024\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") "
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.374315 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7k75\" (UniqueName: \"kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75\") pod \"61bda19d-5103-4038-8296-583fb1d25024\" (UID: \"61bda19d-5103-4038-8296-583fb1d25024\") "
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.374503 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts\") pod \"43326c52-09d6-47c0-a336-cd16e11dd6a0\" (UID: \"43326c52-09d6-47c0-a336-cd16e11dd6a0\") "
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.374778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "61bda19d-5103-4038-8296-583fb1d25024" (UID: "61bda19d-5103-4038-8296-583fb1d25024"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.374872 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "43326c52-09d6-47c0-a336-cd16e11dd6a0" (UID: "43326c52-09d6-47c0-a336-cd16e11dd6a0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.375739 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/61bda19d-5103-4038-8296-583fb1d25024-operator-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.375832 4681 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43326c52-09d6-47c0-a336-cd16e11dd6a0-operator-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.382432 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds" (OuterVolumeSpecName: "kube-api-access-fnjds") pod "43326c52-09d6-47c0-a336-cd16e11dd6a0" (UID: "43326c52-09d6-47c0-a336-cd16e11dd6a0"). InnerVolumeSpecName "kube-api-access-fnjds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.389086 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75" (OuterVolumeSpecName: "kube-api-access-m7k75") pod "61bda19d-5103-4038-8296-583fb1d25024" (UID: "61bda19d-5103-4038-8296-583fb1d25024"). InnerVolumeSpecName "kube-api-access-m7k75". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.482506 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7k75\" (UniqueName: \"kubernetes.io/projected/61bda19d-5103-4038-8296-583fb1d25024-kube-api-access-m7k75\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.482554 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnjds\" (UniqueName: \"kubernetes.io/projected/43326c52-09d6-47c0-a336-cd16e11dd6a0-kube-api-access-fnjds\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.809979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-4867-account-create-update-gj56s" event={"ID":"61bda19d-5103-4038-8296-583fb1d25024","Type":"ContainerDied","Data":"5839011d1433399b8f15bae69a584cb6956cd40732713f9dd631f3c602baf909"}
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.810015 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-4867-account-create-update-gj56s"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.810020 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5839011d1433399b8f15bae69a584cb6956cd40732713f9dd631f3c602baf909"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.812684 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-2f8c-account-create-update-96qv2" event={"ID":"43326c52-09d6-47c0-a336-cd16e11dd6a0","Type":"ContainerDied","Data":"cc8084f2436cc2d52944fd2ec5ef3436667fbdf7dcfac590d8deab0ce7ddd9b5"}
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.812721 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc8084f2436cc2d52944fd2ec5ef3436667fbdf7dcfac590d8deab0ce7ddd9b5"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.812804 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-2f8c-account-create-update-96qv2"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.819466 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-central-agent" containerID="cri-o://54abf6825932ee075a152abf32429784fb8e77fcf888a409f8b6c60a4bdf4bfd" gracePeriod=30
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.819720 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerStarted","Data":"3a10d60c6b5f05093985431c34ad25e416ececeefef490951d1b8398c4e5477f"}
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.819767 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.820075 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="proxy-httpd" containerID="cri-o://3a10d60c6b5f05093985431c34ad25e416ececeefef490951d1b8398c4e5477f" gracePeriod=30
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.820121 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="sg-core" containerID="cri-o://863541c08a4105958de6ad17cedfc4c9079a35435eefd2f92ae13bbc6d21b237" gracePeriod=30
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.820156 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-notification-agent" containerID="cri-o://4caeb7e915589760f1b67c931dfecb34555468064a7f241b94dbe0e7d6e00a3e" gracePeriod=30
Apr 04 02:27:09 crc kubenswrapper[4681]: I0404 02:27:09.869852 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.840775747 podStartE2EDuration="13.86983456s" podCreationTimestamp="2026-04-04 02:26:56 +0000 UTC" firstStartedPulling="2026-04-04 02:26:57.496719153 +0000 UTC m=+1897.162494273" lastFinishedPulling="2026-04-04 02:27:09.525777966 +0000 UTC m=+1909.191553086" observedRunningTime="2026-04-04 02:27:09.847790705 +0000 UTC m=+1909.513565825" watchObservedRunningTime="2026-04-04 02:27:09.86983456 +0000 UTC m=+1909.535609670"
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.829900 4681 generic.go:334] "Generic (PLEG): container finished" podID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerID="3a10d60c6b5f05093985431c34ad25e416ececeefef490951d1b8398c4e5477f" exitCode=0
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.829929 4681 generic.go:334] "Generic (PLEG): container finished" podID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerID="863541c08a4105958de6ad17cedfc4c9079a35435eefd2f92ae13bbc6d21b237" exitCode=2
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.829930 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerDied","Data":"3a10d60c6b5f05093985431c34ad25e416ececeefef490951d1b8398c4e5477f"}
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.829965 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerDied","Data":"863541c08a4105958de6ad17cedfc4c9079a35435eefd2f92ae13bbc6d21b237"}
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.955036 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.955141 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 04 02:27:10 crc kubenswrapper[4681]: I0404 02:27:10.966860 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.465079 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.466637 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.497725 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.524628 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.842402 4681 generic.go:334] "Generic (PLEG): container finished" podID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerID="4caeb7e915589760f1b67c931dfecb34555468064a7f241b94dbe0e7d6e00a3e" exitCode=0
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.843369 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerDied","Data":"4caeb7e915589760f1b67c931dfecb34555468064a7f241b94dbe0e7d6e00a3e"}
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.844303 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:11 crc kubenswrapper[4681]: I0404 02:27:11.844330 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.564111 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgfkz"]
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568777 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23481861-506b-4a5d-a1da-d6a21811d7c5" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568810 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="23481861-506b-4a5d-a1da-d6a21811d7c5" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568840 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f70a3b1-7886-4a8d-988e-8bf233a96729" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568852 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f70a3b1-7886-4a8d-988e-8bf233a96729" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568867 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43326c52-09d6-47c0-a336-cd16e11dd6a0" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568875 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43326c52-09d6-47c0-a336-cd16e11dd6a0" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568889 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="624921f8-2de2-4354-9a6b-c5cb0c9e9a21" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568897 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="624921f8-2de2-4354-9a6b-c5cb0c9e9a21" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568911 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e030b-26b4-4add-95b6-aaf9b50907db" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568919 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e030b-26b4-4add-95b6-aaf9b50907db" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: E0404 02:27:12.568944 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61bda19d-5103-4038-8296-583fb1d25024" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.568953 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="61bda19d-5103-4038-8296-583fb1d25024" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569198 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="23481861-506b-4a5d-a1da-d6a21811d7c5" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569216 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e030b-26b4-4add-95b6-aaf9b50907db" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569237 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="61bda19d-5103-4038-8296-583fb1d25024" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569256 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="624921f8-2de2-4354-9a6b-c5cb0c9e9a21" containerName="mariadb-database-create"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569293 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="43326c52-09d6-47c0-a336-cd16e11dd6a0" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.569312 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f70a3b1-7886-4a8d-988e-8bf233a96729" containerName="mariadb-account-create-update"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.570163 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.576274 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgfkz"]
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.610904 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.610943 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l9nvw"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.611313 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.654075 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.654456 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86bbd\" (UniqueName: \"kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.654505 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.654587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.757682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.757771 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86bbd\" (UniqueName: \"kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.757816 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.757891 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.766154 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.766400 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.770006 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.782002 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86bbd\" (UniqueName: \"kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd\") pod \"nova-cell0-conductor-db-sync-dgfkz\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:12 crc kubenswrapper[4681]: I0404 02:27:12.945424 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgfkz"
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.474546 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgfkz"]
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.863482 4681 generic.go:334] "Generic (PLEG): container finished" podID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerID="54abf6825932ee075a152abf32429784fb8e77fcf888a409f8b6c60a4bdf4bfd" exitCode=0
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.863666 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerDied","Data":"54abf6825932ee075a152abf32429784fb8e77fcf888a409f8b6c60a4bdf4bfd"}
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.865312 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.865331 4681 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 04 02:27:13 crc kubenswrapper[4681]: I0404 02:27:13.866120 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" event={"ID":"eb3d2475-e275-4139-8d39-3b0518fa8e02","Type":"ContainerStarted","Data":"afa2242ef4d8cd23d1599f7a3c04ac23753b7ec1bf03ba56d7e5162f16c4f267"}
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.193789 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.198815 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.737589 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808017 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzrjm\" (UniqueName: \"kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808081 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808136 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808194 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808222 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808293 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808394 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts\") pod \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\" (UID: \"a09665ac-96ec-498e-a5de-0f2e60a6a6a4\") "
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.808773 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.809005 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-run-httpd\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.809016 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.814577 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts" (OuterVolumeSpecName: "scripts") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.814829 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm" (OuterVolumeSpecName: "kube-api-access-lzrjm") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "kube-api-access-lzrjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.837327 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.882033 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.882163 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a09665ac-96ec-498e-a5de-0f2e60a6a6a4","Type":"ContainerDied","Data":"d3c8eded2a9aaea1936056634d7a7d6a959a84dcac7cdc037f6e77a4e601b7ed"}
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.882422 4681 scope.go:117] "RemoveContainer" containerID="3a10d60c6b5f05093985431c34ad25e416ececeefef490951d1b8398c4e5477f"
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.911748 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-log-httpd\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.911779 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-scripts\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.911791 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzrjm\" (UniqueName: \"kubernetes.io/projected/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-kube-api-access-lzrjm\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.911809 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.914038 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:27:14 crc kubenswrapper[4681]: I0404 02:27:14.963109 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data" (OuterVolumeSpecName: "config-data") pod "a09665ac-96ec-498e-a5de-0f2e60a6a6a4" (UID: "a09665ac-96ec-498e-a5de-0f2e60a6a6a4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.013785 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.013820 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a09665ac-96ec-498e-a5de-0f2e60a6a6a4-config-data\") on node \"crc\" DevicePath \"\""
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.051712 4681 scope.go:117] "RemoveContainer" containerID="863541c08a4105958de6ad17cedfc4c9079a35435eefd2f92ae13bbc6d21b237"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.082878 4681 scope.go:117] "RemoveContainer" containerID="4caeb7e915589760f1b67c931dfecb34555468064a7f241b94dbe0e7d6e00a3e"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.108163 4681 scope.go:117] "RemoveContainer" containerID="54abf6825932ee075a152abf32429784fb8e77fcf888a409f8b6c60a4bdf4bfd"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.230811 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.252016 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.264979 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Apr 04 02:27:15 crc kubenswrapper[4681]: E0404 02:27:15.265467 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="proxy-httpd"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265484 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="proxy-httpd"
Apr 04 02:27:15 crc kubenswrapper[4681]: E0404 02:27:15.265498 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-central-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265503 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-central-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: E0404 02:27:15.265529 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="sg-core"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265535 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="sg-core"
Apr 04 02:27:15 crc kubenswrapper[4681]: E0404 02:27:15.265561 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-notification-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265567 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-notification-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265744 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="proxy-httpd"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265762 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="sg-core"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265774 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-notification-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.265785 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" containerName="ceilometer-central-agent"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.267796 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.271981 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.272693 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.274276 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332468 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332489 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332679 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.332697 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn49h\" (UniqueName: \"kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.434655 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435309 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn49h\" (UniqueName: \"kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435535 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435643 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0"
Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435724 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435833 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.435832 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.436035 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.440291 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.440723 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.440747 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.453874 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.458512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn49h\" (UniqueName: \"kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h\") pod \"ceilometer-0\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " pod="openstack/ceilometer-0" Apr 04 02:27:15 crc kubenswrapper[4681]: I0404 02:27:15.591977 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:16 crc kubenswrapper[4681]: I0404 02:27:16.071178 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:16 crc kubenswrapper[4681]: W0404 02:27:16.084076 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podffcef6b3_e265_474b_bd35_8fa4d786e1aa.slice/crio-28bb167874e562e54d7ef6aa04c2ac52437663173e53859d000b4525d0563ae8 WatchSource:0}: Error finding container 28bb167874e562e54d7ef6aa04c2ac52437663173e53859d000b4525d0563ae8: Status 404 returned error can't find the container with id 28bb167874e562e54d7ef6aa04c2ac52437663173e53859d000b4525d0563ae8 Apr 04 02:27:16 crc kubenswrapper[4681]: I0404 02:27:16.904559 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerStarted","Data":"83f33e7f437c0c3918030386c130571b8269a874a0bfad4c2c290f03236323bc"} Apr 04 02:27:16 crc kubenswrapper[4681]: I0404 02:27:16.904794 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerStarted","Data":"28bb167874e562e54d7ef6aa04c2ac52437663173e53859d000b4525d0563ae8"} Apr 04 02:27:17 crc kubenswrapper[4681]: I0404 02:27:17.216633 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a09665ac-96ec-498e-a5de-0f2e60a6a6a4" path="/var/lib/kubelet/pods/a09665ac-96ec-498e-a5de-0f2e60a6a6a4/volumes" Apr 04 02:27:17 crc kubenswrapper[4681]: I0404 02:27:17.916875 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerStarted","Data":"d3e3bf08080cd9b34a2656a777b05040d099a6b667b7266d12de8ecbef15e4a6"} Apr 04 02:27:18 crc kubenswrapper[4681]: I0404 02:27:18.928558 4681 generic.go:334] "Generic (PLEG): container finished" podID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerID="db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" exitCode=0 Apr 04 02:27:18 crc kubenswrapper[4681]: I0404 02:27:18.928586 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3190cdec-e3a8-4aa4-81bb-bd814b96537f","Type":"ContainerDied","Data":"db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e"} Apr 04 02:27:19 crc kubenswrapper[4681]: I0404 02:27:19.202787 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:27:19 crc kubenswrapper[4681]: E0404 02:27:19.203063 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:27:21 crc kubenswrapper[4681]: E0404 02:27:21.506560 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e is running failed: container process not found" containerID="db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Apr 04 02:27:21 crc kubenswrapper[4681]: E0404 02:27:21.509030 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e is running failed: container process not found" containerID="db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Apr 04 02:27:21 crc kubenswrapper[4681]: E0404 02:27:21.509625 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e is running failed: container process not found" containerID="db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Apr 04 02:27:21 crc kubenswrapper[4681]: E0404 02:27:21.509665 4681 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e is running failed: container process not found" 
probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerName="watcher-decision-engine" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.087604 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.440991 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.636441 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca\") pod \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.636678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data\") pod \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.636878 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle\") pod \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.636996 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48584\" (UniqueName: \"kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584\") pod \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.637106 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs\") pod \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\" (UID: \"3190cdec-e3a8-4aa4-81bb-bd814b96537f\") " Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.637728 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs" (OuterVolumeSpecName: "logs") pod "3190cdec-e3a8-4aa4-81bb-bd814b96537f" (UID: "3190cdec-e3a8-4aa4-81bb-bd814b96537f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.640468 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584" (OuterVolumeSpecName: "kube-api-access-48584") pod "3190cdec-e3a8-4aa4-81bb-bd814b96537f" (UID: "3190cdec-e3a8-4aa4-81bb-bd814b96537f"). InnerVolumeSpecName "kube-api-access-48584". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.671656 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "3190cdec-e3a8-4aa4-81bb-bd814b96537f" (UID: "3190cdec-e3a8-4aa4-81bb-bd814b96537f"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.680286 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3190cdec-e3a8-4aa4-81bb-bd814b96537f" (UID: "3190cdec-e3a8-4aa4-81bb-bd814b96537f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.721154 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data" (OuterVolumeSpecName: "config-data") pod "3190cdec-e3a8-4aa4-81bb-bd814b96537f" (UID: "3190cdec-e3a8-4aa4-81bb-bd814b96537f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.739077 4681 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.739116 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.739128 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3190cdec-e3a8-4aa4-81bb-bd814b96537f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.739141 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48584\" (UniqueName: \"kubernetes.io/projected/3190cdec-e3a8-4aa4-81bb-bd814b96537f-kube-api-access-48584\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:24 crc kubenswrapper[4681]: I0404 02:27:24.739152 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3190cdec-e3a8-4aa4-81bb-bd814b96537f-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.000713 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" 
event={"ID":"eb3d2475-e275-4139-8d39-3b0518fa8e02","Type":"ContainerStarted","Data":"3728b19c83666aac895486a18b6e03ef521b83f65df191dcdd7eb17506ee7643"} Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.003382 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerStarted","Data":"d24fbcddd34bb382ac2f3359c50e45e5f2d83203297676d653c71e4fcf6da29f"} Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.004956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"3190cdec-e3a8-4aa4-81bb-bd814b96537f","Type":"ContainerDied","Data":"2fa72aacc1ca8d20e306c12a53b94e8022d116fe5b8a825361af8cb597a2498b"} Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.004987 4681 scope.go:117] "RemoveContainer" containerID="db94528a4fd5a4633a4ee440f0c10d99a7ff242a03b0c8efeb0ff3395435c60e" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.005087 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.030711 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" podStartSLOduration=2.282961263 podStartE2EDuration="13.030690632s" podCreationTimestamp="2026-04-04 02:27:12 +0000 UTC" firstStartedPulling="2026-04-04 02:27:13.485879512 +0000 UTC m=+1913.151654632" lastFinishedPulling="2026-04-04 02:27:24.233608881 +0000 UTC m=+1923.899384001" observedRunningTime="2026-04-04 02:27:25.023856864 +0000 UTC m=+1924.689631984" watchObservedRunningTime="2026-04-04 02:27:25.030690632 +0000 UTC m=+1924.696465752" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.055213 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.064019 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.082700 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:25 crc kubenswrapper[4681]: E0404 02:27:25.083148 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerName="watcher-decision-engine" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.083164 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerName="watcher-decision-engine" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.083402 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" containerName="watcher-decision-engine" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.084017 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.087781 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.105444 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.214245 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3190cdec-e3a8-4aa4-81bb-bd814b96537f" path="/var/lib/kubelet/pods/3190cdec-e3a8-4aa4-81bb-bd814b96537f/volumes" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.247870 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qtn4\" (UniqueName: \"kubernetes.io/projected/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-kube-api-access-2qtn4\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.247982 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.248016 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.248036 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.248131 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-logs\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.350417 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-logs\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.351258 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-logs\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.351845 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qtn4\" (UniqueName: \"kubernetes.io/projected/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-kube-api-access-2qtn4\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.352251 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.352342 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.352371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.356602 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-config-data\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.356829 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.357084 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-combined-ca-bundle\") pod 
\"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.361491 4681 scope.go:117] "RemoveContainer" containerID="eb72537278ec7f69a33cd0f2ce675823ca7641fafd48cd792c89d075fe907366" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.373495 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qtn4\" (UniqueName: \"kubernetes.io/projected/4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0-kube-api-access-2qtn4\") pod \"watcher-decision-engine-0\" (UID: \"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0\") " pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.404146 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.605210 4681 scope.go:117] "RemoveContainer" containerID="98e4b427ccb02feb53c227a64ef9528e7af40fdb4c88d48ec5f8772059950776" Apr 04 02:27:25 crc kubenswrapper[4681]: I0404 02:27:25.896284 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Apr 04 02:27:25 crc kubenswrapper[4681]: W0404 02:27:25.933953 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c83a5da_4f75_4ce3_8ed1_77404dd4f2b0.slice/crio-e500765868f860018a734b67aeafafcf781a5c554308e88f724a913dfe2e9e63 WatchSource:0}: Error finding container e500765868f860018a734b67aeafafcf781a5c554308e88f724a913dfe2e9e63: Status 404 returned error can't find the container with id e500765868f860018a734b67aeafafcf781a5c554308e88f724a913dfe2e9e63 Apr 04 02:27:26 crc kubenswrapper[4681]: I0404 02:27:26.037887 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0","Type":"ContainerStarted","Data":"e500765868f860018a734b67aeafafcf781a5c554308e88f724a913dfe2e9e63"} Apr 04 02:27:26 crc kubenswrapper[4681]: I0404 02:27:26.852785 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:27:28 crc kubenswrapper[4681]: I0404 02:27:28.059578 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0","Type":"ContainerStarted","Data":"bdbac511495cbc62d27cc1a917b12296cfffd1b5fac02973a0a01ed8a7122213"} Apr 04 02:27:28 crc kubenswrapper[4681]: I0404 02:27:28.085063 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.085038965 podStartE2EDuration="3.085038965s" podCreationTimestamp="2026-04-04 02:27:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:28.078560607 +0000 UTC m=+1927.744335767" watchObservedRunningTime="2026-04-04 02:27:28.085038965 +0000 UTC m=+1927.750814125" Apr 04 02:27:29 crc kubenswrapper[4681]: I0404 02:27:29.269917 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-554fd9954f-c5kv8" Apr 04 02:27:29 crc kubenswrapper[4681]: I0404 02:27:29.345175 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:27:29 crc kubenswrapper[4681]: I0404 02:27:29.345526 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-f7f9d8bd4-j5zm6" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-api" containerID="cri-o://b2222ed78e6d8f588226b2399a03c39faddeeb5eb23772118a56a534823a6175" gracePeriod=30 Apr 04 02:27:29 crc kubenswrapper[4681]: I0404 02:27:29.345679 4681 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/neutron-f7f9d8bd4-j5zm6" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" containerID="cri-o://aecfcd3810bcc4dad67b72df2fc14884155f796fae37d75380459f9f72523479" gracePeriod=30 Apr 04 02:27:30 crc kubenswrapper[4681]: I0404 02:27:30.085236 4681 generic.go:334] "Generic (PLEG): container finished" podID="838b96bb-fdff-4688-9737-aa60034d9538" containerID="aecfcd3810bcc4dad67b72df2fc14884155f796fae37d75380459f9f72523479" exitCode=0 Apr 04 02:27:30 crc kubenswrapper[4681]: I0404 02:27:30.085533 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerDied","Data":"aecfcd3810bcc4dad67b72df2fc14884155f796fae37d75380459f9f72523479"} Apr 04 02:27:32 crc kubenswrapper[4681]: I0404 02:27:32.201427 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:27:32 crc kubenswrapper[4681]: E0404 02:27:32.201766 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.128494 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerStarted","Data":"29dd94245290c1cd391c493b18df6815c6133d7163be64714c1d021fc264d422"} Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.129025 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 
02:27:34.128731 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="sg-core" containerID="cri-o://d24fbcddd34bb382ac2f3359c50e45e5f2d83203297676d653c71e4fcf6da29f" gracePeriod=30 Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.128792 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="proxy-httpd" containerID="cri-o://29dd94245290c1cd391c493b18df6815c6133d7163be64714c1d021fc264d422" gracePeriod=30 Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.128684 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-central-agent" containerID="cri-o://83f33e7f437c0c3918030386c130571b8269a874a0bfad4c2c290f03236323bc" gracePeriod=30 Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.128822 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-notification-agent" containerID="cri-o://d3e3bf08080cd9b34a2656a777b05040d099a6b667b7266d12de8ecbef15e4a6" gracePeriod=30 Apr 04 02:27:34 crc kubenswrapper[4681]: I0404 02:27:34.171878 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.180356499 podStartE2EDuration="19.171851202s" podCreationTimestamp="2026-04-04 02:27:15 +0000 UTC" firstStartedPulling="2026-04-04 02:27:16.087660492 +0000 UTC m=+1915.753435622" lastFinishedPulling="2026-04-04 02:27:31.079155195 +0000 UTC m=+1930.744930325" observedRunningTime="2026-04-04 02:27:34.159634996 +0000 UTC m=+1933.825410126" watchObservedRunningTime="2026-04-04 02:27:34.171851202 +0000 UTC m=+1933.837626342" Apr 04 02:27:35 crc 
kubenswrapper[4681]: I0404 02:27:35.140444 4681 generic.go:334] "Generic (PLEG): container finished" podID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerID="29dd94245290c1cd391c493b18df6815c6133d7163be64714c1d021fc264d422" exitCode=0 Apr 04 02:27:35 crc kubenswrapper[4681]: I0404 02:27:35.140763 4681 generic.go:334] "Generic (PLEG): container finished" podID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerID="d24fbcddd34bb382ac2f3359c50e45e5f2d83203297676d653c71e4fcf6da29f" exitCode=2 Apr 04 02:27:35 crc kubenswrapper[4681]: I0404 02:27:35.140506 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerDied","Data":"29dd94245290c1cd391c493b18df6815c6133d7163be64714c1d021fc264d422"} Apr 04 02:27:35 crc kubenswrapper[4681]: I0404 02:27:35.140804 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerDied","Data":"d24fbcddd34bb382ac2f3359c50e45e5f2d83203297676d653c71e4fcf6da29f"} Apr 04 02:27:35 crc kubenswrapper[4681]: I0404 02:27:35.404945 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:35 crc kubenswrapper[4681]: I0404 02:27:35.435170 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:36 crc kubenswrapper[4681]: I0404 02:27:36.156661 4681 generic.go:334] "Generic (PLEG): container finished" podID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerID="83f33e7f437c0c3918030386c130571b8269a874a0bfad4c2c290f03236323bc" exitCode=0 Apr 04 02:27:36 crc kubenswrapper[4681]: I0404 02:27:36.158957 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerDied","Data":"83f33e7f437c0c3918030386c130571b8269a874a0bfad4c2c290f03236323bc"} Apr 
04 02:27:36 crc kubenswrapper[4681]: I0404 02:27:36.159036 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:36 crc kubenswrapper[4681]: I0404 02:27:36.185407 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Apr 04 02:27:39 crc kubenswrapper[4681]: I0404 02:27:39.705402 4681 generic.go:334] "Generic (PLEG): container finished" podID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerID="d3e3bf08080cd9b34a2656a777b05040d099a6b667b7266d12de8ecbef15e4a6" exitCode=0 Apr 04 02:27:39 crc kubenswrapper[4681]: I0404 02:27:39.705466 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerDied","Data":"d3e3bf08080cd9b34a2656a777b05040d099a6b667b7266d12de8ecbef15e4a6"} Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.344374 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.479727 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn49h\" (UniqueName: \"kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.479804 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.479869 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.479949 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.479990 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.480012 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.480029 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.480476 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.480590 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.487576 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h" (OuterVolumeSpecName: "kube-api-access-mn49h") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "kube-api-access-mn49h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.487643 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts" (OuterVolumeSpecName: "scripts") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.529861 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.581010 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.581760 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") pod \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\" (UID: \"ffcef6b3-e265-474b-bd35-8fa4d786e1aa\") " Apr 04 02:27:40 crc kubenswrapper[4681]: W0404 02:27:40.581878 4681 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/ffcef6b3-e265-474b-bd35-8fa4d786e1aa/volumes/kubernetes.io~secret/combined-ca-bundle Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.581891 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582669 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582721 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582733 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582746 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582757 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mn49h\" (UniqueName: \"kubernetes.io/projected/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-kube-api-access-mn49h\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.582767 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.624466 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data" (OuterVolumeSpecName: "config-data") pod "ffcef6b3-e265-474b-bd35-8fa4d786e1aa" (UID: "ffcef6b3-e265-474b-bd35-8fa4d786e1aa"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.684396 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ffcef6b3-e265-474b-bd35-8fa4d786e1aa-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.718106 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ffcef6b3-e265-474b-bd35-8fa4d786e1aa","Type":"ContainerDied","Data":"28bb167874e562e54d7ef6aa04c2ac52437663173e53859d000b4525d0563ae8"} Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.718204 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.718673 4681 scope.go:117] "RemoveContainer" containerID="29dd94245290c1cd391c493b18df6815c6133d7163be64714c1d021fc264d422" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.722712 4681 generic.go:334] "Generic (PLEG): container finished" podID="838b96bb-fdff-4688-9737-aa60034d9538" containerID="b2222ed78e6d8f588226b2399a03c39faddeeb5eb23772118a56a534823a6175" exitCode=0 Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.722744 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerDied","Data":"b2222ed78e6d8f588226b2399a03c39faddeeb5eb23772118a56a534823a6175"} Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.764526 4681 scope.go:117] "RemoveContainer" containerID="d24fbcddd34bb382ac2f3359c50e45e5f2d83203297676d653c71e4fcf6da29f" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.769397 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.784818 4681 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.791656 4681 scope.go:117] "RemoveContainer" containerID="d3e3bf08080cd9b34a2656a777b05040d099a6b667b7266d12de8ecbef15e4a6" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.800998 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:40 crc kubenswrapper[4681]: E0404 02:27:40.801538 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-central-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801556 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-central-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: E0404 02:27:40.801573 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-notification-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801580 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-notification-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: E0404 02:27:40.801590 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="proxy-httpd" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801598 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="proxy-httpd" Apr 04 02:27:40 crc kubenswrapper[4681]: E0404 02:27:40.801610 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="sg-core" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801616 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="sg-core" Apr 04 02:27:40 crc 
kubenswrapper[4681]: I0404 02:27:40.801782 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="sg-core" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801799 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-notification-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801807 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="proxy-httpd" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.801821 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" containerName="ceilometer-central-agent" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.803497 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.806687 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.806986 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.814770 4681 scope.go:117] "RemoveContainer" containerID="83f33e7f437c0c3918030386c130571b8269a874a0bfad4c2c290f03236323bc" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.817404 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888157 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " 
pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888233 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888287 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888305 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888363 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5s5\" (UniqueName: \"kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.888568 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.990757 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.990817 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.990916 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5s5\" (UniqueName: \"kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.990963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.991002 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd\") pod \"ceilometer-0\" (UID: 
\"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.991091 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.991160 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.991657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.991730 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.995786 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.995911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.996055 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:40 crc kubenswrapper[4681]: I0404 02:27:40.996787 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.011388 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5s5\" (UniqueName: \"kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5\") pod \"ceilometer-0\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " pod="openstack/ceilometer-0" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.132254 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.211608 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffcef6b3-e265-474b-bd35-8fa4d786e1aa" path="/var/lib/kubelet/pods/ffcef6b3-e265-474b-bd35-8fa4d786e1aa/volumes" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.633081 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.713904 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.733538 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config\") pod \"838b96bb-fdff-4688-9737-aa60034d9538\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.733613 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cblv\" (UniqueName: \"kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv\") pod \"838b96bb-fdff-4688-9737-aa60034d9538\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.733637 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle\") pod \"838b96bb-fdff-4688-9737-aa60034d9538\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.733673 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config\") pod \"838b96bb-fdff-4688-9737-aa60034d9538\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.733751 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs\") pod \"838b96bb-fdff-4688-9737-aa60034d9538\" (UID: \"838b96bb-fdff-4688-9737-aa60034d9538\") " Apr 04 02:27:41 crc 
kubenswrapper[4681]: I0404 02:27:41.744623 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv" (OuterVolumeSpecName: "kube-api-access-7cblv") pod "838b96bb-fdff-4688-9737-aa60034d9538" (UID: "838b96bb-fdff-4688-9737-aa60034d9538"). InnerVolumeSpecName "kube-api-access-7cblv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.744730 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "838b96bb-fdff-4688-9737-aa60034d9538" (UID: "838b96bb-fdff-4688-9737-aa60034d9538"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.750393 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f7f9d8bd4-j5zm6" event={"ID":"838b96bb-fdff-4688-9737-aa60034d9538","Type":"ContainerDied","Data":"48d1faae73814fccab342495ee67c1f8b0d95df949a5a9fdca1c14a0f68e6457"} Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.750456 4681 scope.go:117] "RemoveContainer" containerID="aecfcd3810bcc4dad67b72df2fc14884155f796fae37d75380459f9f72523479" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.750672 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-f7f9d8bd4-j5zm6" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.755118 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerStarted","Data":"cb688cf282818eda3f55cdcdb0aac7a4e173f3ad9578eb347c236bb8873baec1"} Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.797235 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config" (OuterVolumeSpecName: "config") pod "838b96bb-fdff-4688-9737-aa60034d9538" (UID: "838b96bb-fdff-4688-9737-aa60034d9538"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.800217 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "838b96bb-fdff-4688-9737-aa60034d9538" (UID: "838b96bb-fdff-4688-9737-aa60034d9538"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.815217 4681 scope.go:117] "RemoveContainer" containerID="b2222ed78e6d8f588226b2399a03c39faddeeb5eb23772118a56a534823a6175" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.837968 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.837986 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cblv\" (UniqueName: \"kubernetes.io/projected/838b96bb-fdff-4688-9737-aa60034d9538-kube-api-access-7cblv\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.837998 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.838006 4681 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-httpd-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.840157 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "838b96bb-fdff-4688-9737-aa60034d9538" (UID: "838b96bb-fdff-4688-9737-aa60034d9538"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:41 crc kubenswrapper[4681]: I0404 02:27:41.939729 4681 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/838b96bb-fdff-4688-9737-aa60034d9538-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:42 crc kubenswrapper[4681]: I0404 02:27:42.086736 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:27:42 crc kubenswrapper[4681]: I0404 02:27:42.095515 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-f7f9d8bd4-j5zm6"] Apr 04 02:27:42 crc kubenswrapper[4681]: I0404 02:27:42.767123 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerStarted","Data":"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41"} Apr 04 02:27:42 crc kubenswrapper[4681]: I0404 02:27:42.767389 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerStarted","Data":"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291"} Apr 04 02:27:42 crc kubenswrapper[4681]: I0404 02:27:42.836604 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:43 crc kubenswrapper[4681]: I0404 02:27:43.214585 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838b96bb-fdff-4688-9737-aa60034d9538" path="/var/lib/kubelet/pods/838b96bb-fdff-4688-9737-aa60034d9538/volumes" Apr 04 02:27:43 crc kubenswrapper[4681]: I0404 02:27:43.780323 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerStarted","Data":"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7"} Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820180 
4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerStarted","Data":"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec"} Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820362 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-central-agent" containerID="cri-o://a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291" gracePeriod=30 Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820408 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="sg-core" containerID="cri-o://a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7" gracePeriod=30 Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820444 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-notification-agent" containerID="cri-o://959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41" gracePeriod=30 Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820443 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="proxy-httpd" containerID="cri-o://52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec" gracePeriod=30 Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.820895 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:27:45 crc kubenswrapper[4681]: I0404 02:27:45.848533 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.345334859 
podStartE2EDuration="5.848512493s" podCreationTimestamp="2026-04-04 02:27:40 +0000 UTC" firstStartedPulling="2026-04-04 02:27:41.71115835 +0000 UTC m=+1941.376933470" lastFinishedPulling="2026-04-04 02:27:45.214335954 +0000 UTC m=+1944.880111104" observedRunningTime="2026-04-04 02:27:45.842760624 +0000 UTC m=+1945.508535744" watchObservedRunningTime="2026-04-04 02:27:45.848512493 +0000 UTC m=+1945.514287613" Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.830760 4681 generic.go:334] "Generic (PLEG): container finished" podID="44324be8-7883-46e8-8eda-4639355efd37" containerID="52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec" exitCode=0 Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.831064 4681 generic.go:334] "Generic (PLEG): container finished" podID="44324be8-7883-46e8-8eda-4639355efd37" containerID="a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7" exitCode=2 Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.831077 4681 generic.go:334] "Generic (PLEG): container finished" podID="44324be8-7883-46e8-8eda-4639355efd37" containerID="959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41" exitCode=0 Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.831111 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerDied","Data":"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec"} Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.831141 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerDied","Data":"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7"} Apr 04 02:27:46 crc kubenswrapper[4681]: I0404 02:27:46.831153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerDied","Data":"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41"} Apr 04 02:27:47 crc kubenswrapper[4681]: I0404 02:27:47.201140 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:27:47 crc kubenswrapper[4681]: E0404 02:27:47.201545 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.429832 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.498806 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.498967 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499006 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc5s5\" (UniqueName: \"kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" 
(UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499061 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499124 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499167 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499194 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd\") pod \"44324be8-7883-46e8-8eda-4639355efd37\" (UID: \"44324be8-7883-46e8-8eda-4639355efd37\") " Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.499905 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.504773 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts" (OuterVolumeSpecName: "scripts") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.505402 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5" (OuterVolumeSpecName: "kube-api-access-tc5s5") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "kube-api-access-tc5s5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.549201 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.596343 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601878 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601912 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc5s5\" (UniqueName: \"kubernetes.io/projected/44324be8-7883-46e8-8eda-4639355efd37-kube-api-access-tc5s5\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601923 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601932 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601941 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.601949 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/44324be8-7883-46e8-8eda-4639355efd37-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.613362 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data" (OuterVolumeSpecName: "config-data") pod "44324be8-7883-46e8-8eda-4639355efd37" (UID: "44324be8-7883-46e8-8eda-4639355efd37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.704070 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/44324be8-7883-46e8-8eda-4639355efd37-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.872481 4681 generic.go:334] "Generic (PLEG): container finished" podID="44324be8-7883-46e8-8eda-4639355efd37" containerID="a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291" exitCode=0 Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.872527 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerDied","Data":"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291"} Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.872557 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"44324be8-7883-46e8-8eda-4639355efd37","Type":"ContainerDied","Data":"cb688cf282818eda3f55cdcdb0aac7a4e173f3ad9578eb347c236bb8873baec1"} Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.872573 4681 scope.go:117] "RemoveContainer" containerID="52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.872570 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.893622 4681 scope.go:117] "RemoveContainer" containerID="a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.911609 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.921703 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.924488 4681 scope.go:117] "RemoveContainer" containerID="959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.944958 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945414 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="sg-core" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945433 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="sg-core" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945461 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945468 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945482 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-notification-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945490 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-notification-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945510 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-central-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945517 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-central-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945532 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="proxy-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945538 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="proxy-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.945550 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-api" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945556 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-api" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945753 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="sg-core" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945773 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-api" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945784 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-notification-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945797 4681 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="838b96bb-fdff-4688-9737-aa60034d9538" containerName="neutron-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945812 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="proxy-httpd" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.945823 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="44324be8-7883-46e8-8eda-4639355efd37" containerName="ceilometer-central-agent" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.947561 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.950388 4681 scope.go:117] "RemoveContainer" containerID="a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.950668 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.955922 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.967969 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.974678 4681 scope.go:117] "RemoveContainer" containerID="52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.975158 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec\": container with ID starting with 52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec not found: ID does not exist" containerID="52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec" Apr 04 
02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975197 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec"} err="failed to get container status \"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec\": rpc error: code = NotFound desc = could not find container \"52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec\": container with ID starting with 52002e2c5815b666a31f3ac29a2f1dafe873d029b2109dc12adb38041a55c0ec not found: ID does not exist" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975221 4681 scope.go:117] "RemoveContainer" containerID="a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.975535 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7\": container with ID starting with a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7 not found: ID does not exist" containerID="a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975558 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7"} err="failed to get container status \"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7\": rpc error: code = NotFound desc = could not find container \"a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7\": container with ID starting with a0a9bdd89d415ddb54275c9c36bdd4cab8e444790cb687ce2794842d9cf799c7 not found: ID does not exist" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975574 4681 scope.go:117] "RemoveContainer" 
containerID="959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.975812 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41\": container with ID starting with 959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41 not found: ID does not exist" containerID="959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975841 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41"} err="failed to get container status \"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41\": rpc error: code = NotFound desc = could not find container \"959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41\": container with ID starting with 959ab40c2e22e478b693833f11d21adfd34ba5eb3d2f8a6b4343469ea3df2c41 not found: ID does not exist" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.975860 4681 scope.go:117] "RemoveContainer" containerID="a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291" Apr 04 02:27:50 crc kubenswrapper[4681]: E0404 02:27:50.986469 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291\": container with ID starting with a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291 not found: ID does not exist" containerID="a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291" Apr 04 02:27:50 crc kubenswrapper[4681]: I0404 02:27:50.986509 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291"} err="failed to get container status \"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291\": rpc error: code = NotFound desc = could not find container \"a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291\": container with ID starting with a902a836cf4090c6778dd1ea9a854aa87904560f44a7d684d54aa6480e772291 not found: ID does not exist" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009035 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jddnc\" (UniqueName: \"kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009082 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009134 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009161 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 
02:27:51.009320 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009540 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.009583 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111723 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jddnc\" (UniqueName: \"kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111773 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111877 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111905 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111932 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.111989 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.112009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.112518 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.113128 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.115652 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.115835 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.118180 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.118431 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.138152 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jddnc\" (UniqueName: \"kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc\") pod \"ceilometer-0\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " 
pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.212927 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44324be8-7883-46e8-8eda-4639355efd37" path="/var/lib/kubelet/pods/44324be8-7883-46e8-8eda-4639355efd37/volumes" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.273111 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.757712 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:27:51 crc kubenswrapper[4681]: I0404 02:27:51.894331 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerStarted","Data":"932dc3afa6e111a8af5632545106b31c90123b43989e4da0201e15e457c85cd3"} Apr 04 02:27:52 crc kubenswrapper[4681]: I0404 02:27:52.906204 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerStarted","Data":"459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468"} Apr 04 02:27:52 crc kubenswrapper[4681]: I0404 02:27:52.906872 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerStarted","Data":"c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a"} Apr 04 02:27:53 crc kubenswrapper[4681]: I0404 02:27:53.919254 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerStarted","Data":"9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08"} Apr 04 02:27:55 crc kubenswrapper[4681]: I0404 02:27:55.939527 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerStarted","Data":"42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826"} Apr 04 02:27:55 crc kubenswrapper[4681]: I0404 02:27:55.940136 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:27:55 crc kubenswrapper[4681]: I0404 02:27:55.970599 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.020624301 podStartE2EDuration="5.970578725s" podCreationTimestamp="2026-04-04 02:27:50 +0000 UTC" firstStartedPulling="2026-04-04 02:27:51.75862387 +0000 UTC m=+1951.424399000" lastFinishedPulling="2026-04-04 02:27:54.708578294 +0000 UTC m=+1954.374353424" observedRunningTime="2026-04-04 02:27:55.96421753 +0000 UTC m=+1955.629992650" watchObservedRunningTime="2026-04-04 02:27:55.970578725 +0000 UTC m=+1955.636353845" Apr 04 02:27:56 crc kubenswrapper[4681]: I0404 02:27:56.950726 4681 generic.go:334] "Generic (PLEG): container finished" podID="eb3d2475-e275-4139-8d39-3b0518fa8e02" containerID="3728b19c83666aac895486a18b6e03ef521b83f65df191dcdd7eb17506ee7643" exitCode=0 Apr 04 02:27:56 crc kubenswrapper[4681]: I0404 02:27:56.950800 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" event={"ID":"eb3d2475-e275-4139-8d39-3b0518fa8e02","Type":"ContainerDied","Data":"3728b19c83666aac895486a18b6e03ef521b83f65df191dcdd7eb17506ee7643"} Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.344304 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.470483 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-86bbd\" (UniqueName: \"kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd\") pod \"eb3d2475-e275-4139-8d39-3b0518fa8e02\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.470693 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data\") pod \"eb3d2475-e275-4139-8d39-3b0518fa8e02\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.470802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts\") pod \"eb3d2475-e275-4139-8d39-3b0518fa8e02\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.470847 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle\") pod \"eb3d2475-e275-4139-8d39-3b0518fa8e02\" (UID: \"eb3d2475-e275-4139-8d39-3b0518fa8e02\") " Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.476406 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts" (OuterVolumeSpecName: "scripts") pod "eb3d2475-e275-4139-8d39-3b0518fa8e02" (UID: "eb3d2475-e275-4139-8d39-3b0518fa8e02"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.476473 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd" (OuterVolumeSpecName: "kube-api-access-86bbd") pod "eb3d2475-e275-4139-8d39-3b0518fa8e02" (UID: "eb3d2475-e275-4139-8d39-3b0518fa8e02"). InnerVolumeSpecName "kube-api-access-86bbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.503185 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data" (OuterVolumeSpecName: "config-data") pod "eb3d2475-e275-4139-8d39-3b0518fa8e02" (UID: "eb3d2475-e275-4139-8d39-3b0518fa8e02"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.503319 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb3d2475-e275-4139-8d39-3b0518fa8e02" (UID: "eb3d2475-e275-4139-8d39-3b0518fa8e02"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.573481 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.573530 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.573541 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb3d2475-e275-4139-8d39-3b0518fa8e02-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.573552 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-86bbd\" (UniqueName: \"kubernetes.io/projected/eb3d2475-e275-4139-8d39-3b0518fa8e02-kube-api-access-86bbd\") on node \"crc\" DevicePath \"\"" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.970003 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.969952 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-dgfkz" event={"ID":"eb3d2475-e275-4139-8d39-3b0518fa8e02","Type":"ContainerDied","Data":"afa2242ef4d8cd23d1599f7a3c04ac23753b7ec1bf03ba56d7e5162f16c4f267"} Apr 04 02:27:58 crc kubenswrapper[4681]: I0404 02:27:58.970139 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa2242ef4d8cd23d1599f7a3c04ac23753b7ec1bf03ba56d7e5162f16c4f267" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.063784 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Apr 04 02:27:59 crc kubenswrapper[4681]: E0404 02:27:59.064433 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb3d2475-e275-4139-8d39-3b0518fa8e02" containerName="nova-cell0-conductor-db-sync" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.064457 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb3d2475-e275-4139-8d39-3b0518fa8e02" containerName="nova-cell0-conductor-db-sync" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.064710 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb3d2475-e275-4139-8d39-3b0518fa8e02" containerName="nova-cell0-conductor-db-sync" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.065485 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.067156 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.068568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-l9nvw" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.076296 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.183484 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qw5r\" (UniqueName: \"kubernetes.io/projected/f63a7210-378a-4a4e-a458-33f19fbc360b-kube-api-access-9qw5r\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.183625 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.183756 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.285071 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.285216 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.285336 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qw5r\" (UniqueName: \"kubernetes.io/projected/f63a7210-378a-4a4e-a458-33f19fbc360b-kube-api-access-9qw5r\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.288858 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.298938 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f63a7210-378a-4a4e-a458-33f19fbc360b-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.301754 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qw5r\" (UniqueName: \"kubernetes.io/projected/f63a7210-378a-4a4e-a458-33f19fbc360b-kube-api-access-9qw5r\") pod \"nova-cell0-conductor-0\" 
(UID: \"f63a7210-378a-4a4e-a458-33f19fbc360b\") " pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.385220 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.805694 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.982102 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f63a7210-378a-4a4e-a458-33f19fbc360b","Type":"ContainerStarted","Data":"239ccc8327e0af1ba27d0f1ddce394d5ac6c7f6a40b3c9c5c5a355ff7d774e58"} Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.982166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f63a7210-378a-4a4e-a458-33f19fbc360b","Type":"ContainerStarted","Data":"5bbd31bd4019badfb39f9365469dfd47d6286225b86241cf99aa7297d213ecde"} Apr 04 02:27:59 crc kubenswrapper[4681]: I0404 02:27:59.982208 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.007330 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.007245588 podStartE2EDuration="1.007245588s" podCreationTimestamp="2026-04-04 02:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:27:59.998742034 +0000 UTC m=+1959.664517154" watchObservedRunningTime="2026-04-04 02:28:00.007245588 +0000 UTC m=+1959.673020708" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.134177 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587828-ngsn5"] Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 
02:28:00.135958 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.138544 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.138676 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.141557 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.146746 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587828-ngsn5"] Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.201296 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:28:00 crc kubenswrapper[4681]: E0404 02:28:00.202211 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.204144 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkvz\" (UniqueName: \"kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz\") pod \"auto-csr-approver-29587828-ngsn5\" (UID: \"25c42585-4d6b-4b0b-8ae4-d2913e833b34\") " pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:00 crc kubenswrapper[4681]: 
I0404 02:28:00.306397 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdkvz\" (UniqueName: \"kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz\") pod \"auto-csr-approver-29587828-ngsn5\" (UID: \"25c42585-4d6b-4b0b-8ae4-d2913e833b34\") " pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.341768 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdkvz\" (UniqueName: \"kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz\") pod \"auto-csr-approver-29587828-ngsn5\" (UID: \"25c42585-4d6b-4b0b-8ae4-d2913e833b34\") " pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.452745 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.924620 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587828-ngsn5"] Apr 04 02:28:00 crc kubenswrapper[4681]: W0404 02:28:00.925752 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25c42585_4d6b_4b0b_8ae4_d2913e833b34.slice/crio-27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6 WatchSource:0}: Error finding container 27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6: Status 404 returned error can't find the container with id 27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6 Apr 04 02:28:00 crc kubenswrapper[4681]: I0404 02:28:00.994769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" 
event={"ID":"25c42585-4d6b-4b0b-8ae4-d2913e833b34","Type":"ContainerStarted","Data":"27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6"} Apr 04 02:28:03 crc kubenswrapper[4681]: I0404 02:28:03.015433 4681 generic.go:334] "Generic (PLEG): container finished" podID="25c42585-4d6b-4b0b-8ae4-d2913e833b34" containerID="aeaccd3eb19be27ef00b5d7f9e2bc27017b9474b0500bc6ac26f0013dfc3b9f2" exitCode=0 Apr 04 02:28:03 crc kubenswrapper[4681]: I0404 02:28:03.015545 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" event={"ID":"25c42585-4d6b-4b0b-8ae4-d2913e833b34","Type":"ContainerDied","Data":"aeaccd3eb19be27ef00b5d7f9e2bc27017b9474b0500bc6ac26f0013dfc3b9f2"} Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.419582 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.424391 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.496345 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdkvz\" (UniqueName: \"kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz\") pod \"25c42585-4d6b-4b0b-8ae4-d2913e833b34\" (UID: \"25c42585-4d6b-4b0b-8ae4-d2913e833b34\") " Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.505174 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz" (OuterVolumeSpecName: "kube-api-access-pdkvz") pod "25c42585-4d6b-4b0b-8ae4-d2913e833b34" (UID: "25c42585-4d6b-4b0b-8ae4-d2913e833b34"). InnerVolumeSpecName "kube-api-access-pdkvz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.598618 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdkvz\" (UniqueName: \"kubernetes.io/projected/25c42585-4d6b-4b0b-8ae4-d2913e833b34-kube-api-access-pdkvz\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.938637 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zbjjd"] Apr 04 02:28:04 crc kubenswrapper[4681]: E0404 02:28:04.939053 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25c42585-4d6b-4b0b-8ae4-d2913e833b34" containerName="oc" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.939068 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c42585-4d6b-4b0b-8ae4-d2913e833b34" containerName="oc" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.939257 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="25c42585-4d6b-4b0b-8ae4-d2913e833b34" containerName="oc" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.939926 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.942174 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.942194 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Apr 04 02:28:04 crc kubenswrapper[4681]: I0404 02:28:04.994720 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zbjjd"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.007201 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.007290 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwl8f\" (UniqueName: \"kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.007466 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.007737 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.043178 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" event={"ID":"25c42585-4d6b-4b0b-8ae4-d2913e833b34","Type":"ContainerDied","Data":"27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6"} Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.043229 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27cc907e726f42691c3c0a86a2c73bbdfc4e6072a4565a24b8de86c93c2f23a6" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.043316 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587828-ngsn5" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.109831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.109953 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.110014 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwl8f\" (UniqueName: \"kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f\") 
pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.110094 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.116398 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.126331 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.128947 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.155543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwl8f\" (UniqueName: \"kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f\") pod \"nova-cell0-cell-mapping-zbjjd\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " 
pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.195743 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.198354 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.212592 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.255933 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.265529 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.270068 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.277894 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.292334 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.318516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.325581 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.325787 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px7r5\" (UniqueName: \"kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.337401 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429421 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px7r5\" (UniqueName: \"kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429477 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429531 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429615 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r648d\" (UniqueName: \"kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429753 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.429787 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.432177 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.450538 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.455771 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.456734 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.459569 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.465325 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.480964 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.482394 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.496814 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.503018 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px7r5\" (UniqueName: \"kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5\") pod \"nova-scheduler-0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.523493 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537098 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537381 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537466 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537551 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537681 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537806 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r648d\" (UniqueName: \"kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.537915 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj5j5\" (UniqueName: \"kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.538062 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.538186 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.538285 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.538373 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjpss\" (UniqueName: \"kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.546190 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.558144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.558898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc 
kubenswrapper[4681]: I0404 02:28:05.577933 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.609207 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r648d\" (UniqueName: \"kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d\") pod \"nova-api-0\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.641895 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.641977 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj5j5\" (UniqueName: \"kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642144 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642175 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjpss\" (UniqueName: \"kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " 
pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642303 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642393 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.642886 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.648650 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.649201 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.655017 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.664249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.668286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj5j5\" (UniqueName: \"kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5\") pod \"nova-cell1-novncproxy-0\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.669006 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjpss\" (UniqueName: \"kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss\") pod \"nova-metadata-0\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " pod="openstack/nova-metadata-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.681407 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.684108 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.685887 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.721684 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748388 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748439 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748496 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c97lg\" (UniqueName: \"kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: 
\"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748592 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748633 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.748828 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.765374 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587822-jdx8j"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.830096 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587822-jdx8j"] Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.852951 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.853025 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.853289 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.853336 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.853469 
4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.853507 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c97lg\" (UniqueName: \"kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.854252 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.855010 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.855012 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.864631 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.865076 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.880985 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c97lg\" (UniqueName: \"kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg\") pod \"dnsmasq-dns-7d899d57cc-vrn4r\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:05 crc kubenswrapper[4681]: I0404 02:28:05.958570 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.077791 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.228238 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zbjjd"] Apr 04 02:28:06 crc kubenswrapper[4681]: W0404 02:28:06.238539 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb124b58f_5f56_47cf_a141_28e8786a0673.slice/crio-94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96 WatchSource:0}: Error finding container 94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96: Status 404 returned error can't find the container with id 94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96 Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.462993 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.533108 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.810298 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:06 crc kubenswrapper[4681]: W0404 02:28:06.811081 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf6bd6d2_059b_413b_9975_1db785d83944.slice/crio-0aeeb92eaecb23cfb3ab18f9c73a5c8a69bd67c42734c30ca4dfcafd92cff1a0 WatchSource:0}: Error finding container 0aeeb92eaecb23cfb3ab18f9c73a5c8a69bd67c42734c30ca4dfcafd92cff1a0: Status 404 returned error can't find the container with id 0aeeb92eaecb23cfb3ab18f9c73a5c8a69bd67c42734c30ca4dfcafd92cff1a0 Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.988710 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnf4c"] Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 
02:28:06.990159 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.993577 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Apr 04 02:28:06 crc kubenswrapper[4681]: I0404 02:28:06.993603 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Apr 04 02:28:07 crc kubenswrapper[4681]: W0404 02:28:07.005747 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49a198d_f1ad_4eff_9c8d_a0360b9fedb4.slice/crio-f72e4ba71a6925c6323ed9aa811647a3264fd739b31e74e24dd939fa1985eab7 WatchSource:0}: Error finding container f72e4ba71a6925c6323ed9aa811647a3264fd739b31e74e24dd939fa1985eab7: Status 404 returned error can't find the container with id f72e4ba71a6925c6323ed9aa811647a3264fd739b31e74e24dd939fa1985eab7 Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.007464 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.046533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnf4c"] Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.076536 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zbjjd" event={"ID":"b124b58f-5f56-47cf-a141-28e8786a0673","Type":"ContainerStarted","Data":"69baf56b8e9d8b8f84b6a5cd7d34cf35744bb98017478f06e79186c84a1cc68c"} Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.076585 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zbjjd" event={"ID":"b124b58f-5f56-47cf-a141-28e8786a0673","Type":"ContainerStarted","Data":"94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96"} Apr 04 
02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.078251 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"459acb42-4c8c-4498-83dc-9035ae5e38c0","Type":"ContainerStarted","Data":"777a01d670e29ccdac66e4851921c269bcc0646c0621f9a9782c797c9ee6b6dd"} Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.081132 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerStarted","Data":"f72e4ba71a6925c6323ed9aa811647a3264fd739b31e74e24dd939fa1985eab7"} Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.083460 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerStarted","Data":"0aeeb92eaecb23cfb3ab18f9c73a5c8a69bd67c42734c30ca4dfcafd92cff1a0"} Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.084921 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eda4f6e7-db6f-43af-9519-87fb93938d78","Type":"ContainerStarted","Data":"80ffc0175c11ceeb403b29ca3a37089f3f3cd8df3757a9f0d5a08600034d4da7"} Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.092636 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.121972 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zbjjd" podStartSLOduration=3.121948168 podStartE2EDuration="3.121948168s" podCreationTimestamp="2026-04-04 02:28:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:07.099206983 +0000 UTC m=+1966.764982173" watchObservedRunningTime="2026-04-04 02:28:07.121948168 +0000 UTC m=+1966.787723288" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 
02:28:07.136387 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.136482 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.136527 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ng7d\" (UniqueName: \"kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.136605 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.217944 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a536c680-7c89-488e-befb-087242236628" path="/var/lib/kubelet/pods/a536c680-7c89-488e-befb-087242236628/volumes" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.238803 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.238897 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.238939 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ng7d\" (UniqueName: \"kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.239024 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.243736 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.245144 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.245411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.261050 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ng7d\" (UniqueName: \"kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d\") pod \"nova-cell1-conductor-db-sync-nnf4c\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.325609 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:07 crc kubenswrapper[4681]: I0404 02:28:07.854585 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnf4c"] Apr 04 02:28:08 crc kubenswrapper[4681]: I0404 02:28:08.107645 4681 generic.go:334] "Generic (PLEG): container finished" podID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerID="3a5cd134d53b4dd5cb2ad4a9b33e01d47db4c43856148d962308e8d387b51070" exitCode=0 Apr 04 02:28:08 crc kubenswrapper[4681]: I0404 02:28:08.107707 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" event={"ID":"ae22ed73-322c-41fc-821e-b0f2e7217ab6","Type":"ContainerDied","Data":"3a5cd134d53b4dd5cb2ad4a9b33e01d47db4c43856148d962308e8d387b51070"} Apr 04 02:28:08 crc kubenswrapper[4681]: I0404 02:28:08.107757 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" event={"ID":"ae22ed73-322c-41fc-821e-b0f2e7217ab6","Type":"ContainerStarted","Data":"145b92508a5c1190614848f83e3add85e47d8247d3264eb0baaab3ee03c6bb8c"} Apr 04 02:28:08 crc kubenswrapper[4681]: I0404 02:28:08.917500 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:08 crc kubenswrapper[4681]: I0404 02:28:08.930939 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:10 crc kubenswrapper[4681]: I0404 02:28:10.132140 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" event={"ID":"f2883f88-a3fd-46b2-8452-32974b0c6b4f","Type":"ContainerStarted","Data":"f259a6db10d1325f93339d80c39fe6057aab2319d05a927b85e124fb71c14649"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.165412 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"eda4f6e7-db6f-43af-9519-87fb93938d78","Type":"ContainerStarted","Data":"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.165484 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="eda4f6e7-db6f-43af-9519-87fb93938d78" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040" gracePeriod=30 Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.173572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" event={"ID":"f2883f88-a3fd-46b2-8452-32974b0c6b4f","Type":"ContainerStarted","Data":"47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.179122 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"459acb42-4c8c-4498-83dc-9035ae5e38c0","Type":"ContainerStarted","Data":"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.181323 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerStarted","Data":"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.181360 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerStarted","Data":"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.181478 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" 
containerName="nova-metadata-log" containerID="cri-o://b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6" gracePeriod=30 Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.181773 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-metadata" containerID="cri-o://a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83" gracePeriod=30 Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.195935 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.295996327 podStartE2EDuration="7.195913181s" podCreationTimestamp="2026-04-04 02:28:05 +0000 UTC" firstStartedPulling="2026-04-04 02:28:06.557499788 +0000 UTC m=+1966.223274908" lastFinishedPulling="2026-04-04 02:28:11.457416642 +0000 UTC m=+1971.123191762" observedRunningTime="2026-04-04 02:28:12.180451635 +0000 UTC m=+1971.846226755" watchObservedRunningTime="2026-04-04 02:28:12.195913181 +0000 UTC m=+1971.861688301" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.206242 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerStarted","Data":"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.206309 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerStarted","Data":"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.219866 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" 
event={"ID":"ae22ed73-322c-41fc-821e-b0f2e7217ab6","Type":"ContainerStarted","Data":"f99760b19a10d6991768c5b00c03b065466eb4182071e7c363261547edb8ae3b"} Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.220154 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.237058 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.287250236 podStartE2EDuration="7.236983211s" podCreationTimestamp="2026-04-04 02:28:05 +0000 UTC" firstStartedPulling="2026-04-04 02:28:06.534100494 +0000 UTC m=+1966.199875614" lastFinishedPulling="2026-04-04 02:28:11.483833469 +0000 UTC m=+1971.149608589" observedRunningTime="2026-04-04 02:28:12.213877484 +0000 UTC m=+1971.879652604" watchObservedRunningTime="2026-04-04 02:28:12.236983211 +0000 UTC m=+1971.902758331" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.253851 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" podStartSLOduration=6.253829584 podStartE2EDuration="6.253829584s" podCreationTimestamp="2026-04-04 02:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:12.236064395 +0000 UTC m=+1971.901839515" watchObservedRunningTime="2026-04-04 02:28:12.253829584 +0000 UTC m=+1971.919604704" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.275330 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.825880515 podStartE2EDuration="7.275255383s" podCreationTimestamp="2026-04-04 02:28:05 +0000 UTC" firstStartedPulling="2026-04-04 02:28:07.008017123 +0000 UTC m=+1966.673792243" lastFinishedPulling="2026-04-04 02:28:11.457391991 +0000 UTC m=+1971.123167111" observedRunningTime="2026-04-04 
02:28:12.260678642 +0000 UTC m=+1971.926453762" watchObservedRunningTime="2026-04-04 02:28:12.275255383 +0000 UTC m=+1971.941030503" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.377842 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" podStartSLOduration=7.377819856 podStartE2EDuration="7.377819856s" podCreationTimestamp="2026-04-04 02:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:12.281913237 +0000 UTC m=+1971.947688357" watchObservedRunningTime="2026-04-04 02:28:12.377819856 +0000 UTC m=+1972.043594996" Apr 04 02:28:12 crc kubenswrapper[4681]: I0404 02:28:12.389490 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.7191062969999997 podStartE2EDuration="7.389473476s" podCreationTimestamp="2026-04-04 02:28:05 +0000 UTC" firstStartedPulling="2026-04-04 02:28:06.813629945 +0000 UTC m=+1966.479405065" lastFinishedPulling="2026-04-04 02:28:11.483997124 +0000 UTC m=+1971.149772244" observedRunningTime="2026-04-04 02:28:12.333459105 +0000 UTC m=+1971.999234225" watchObservedRunningTime="2026-04-04 02:28:12.389473476 +0000 UTC m=+1972.055248596" Apr 04 02:28:13 crc kubenswrapper[4681]: I0404 02:28:13.231221 4681 generic.go:334] "Generic (PLEG): container finished" podID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerID="b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6" exitCode=143 Apr 04 02:28:13 crc kubenswrapper[4681]: I0404 02:28:13.231319 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerDied","Data":"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6"} Apr 04 02:28:14 crc kubenswrapper[4681]: I0404 02:28:14.201077 4681 scope.go:117] "RemoveContainer" 
containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:28:14 crc kubenswrapper[4681]: E0404 02:28:14.201654 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.579217 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.579285 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.611936 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.687593 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.771120 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:28:15 crc kubenswrapper[4681]: I0404 02:28:15.773623 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.080477 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.154026 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.154364 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="dnsmasq-dns" containerID="cri-o://5a037ab7c6105493761488a21dba9444577bf51cd20cd68c1034da93d6eff562" gracePeriod=10 Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.265639 4681 generic.go:334] "Generic (PLEG): container finished" podID="b124b58f-5f56-47cf-a141-28e8786a0673" containerID="69baf56b8e9d8b8f84b6a5cd7d34cf35744bb98017478f06e79186c84a1cc68c" exitCode=0 Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.266725 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zbjjd" event={"ID":"b124b58f-5f56-47cf-a141-28e8786a0673","Type":"ContainerDied","Data":"69baf56b8e9d8b8f84b6a5cd7d34cf35744bb98017478f06e79186c84a1cc68c"} Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.350407 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.667223 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.190:5353: connect: connection refused" Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.813514 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:16 crc kubenswrapper[4681]: I0404 02:28:16.855495 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-api" probeResult="failure" output="Get 
\"http://10.217.0.216:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.308350 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7b18d01-a152-407d-94a5-993382ffd32f" containerID="5a037ab7c6105493761488a21dba9444577bf51cd20cd68c1034da93d6eff562" exitCode=0 Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.308601 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" event={"ID":"c7b18d01-a152-407d-94a5-993382ffd32f","Type":"ContainerDied","Data":"5a037ab7c6105493761488a21dba9444577bf51cd20cd68c1034da93d6eff562"} Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.458431 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600570 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600633 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf99b\" (UniqueName: \"kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600781 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600891 4681 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600916 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.600933 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc\") pod \"c7b18d01-a152-407d-94a5-993382ffd32f\" (UID: \"c7b18d01-a152-407d-94a5-993382ffd32f\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.611566 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b" (OuterVolumeSpecName: "kube-api-access-xf99b") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "kube-api-access-xf99b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.671655 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.689135 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.703505 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.703545 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.703557 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf99b\" (UniqueName: \"kubernetes.io/projected/c7b18d01-a152-407d-94a5-993382ffd32f-kube-api-access-xf99b\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.708173 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.763337 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config" (OuterVolumeSpecName: "config") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.780047 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c7b18d01-a152-407d-94a5-993382ffd32f" (UID: "c7b18d01-a152-407d-94a5-993382ffd32f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.807301 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.807341 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.807355 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c7b18d01-a152-407d-94a5-993382ffd32f-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.846683 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.908256 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data\") pod \"b124b58f-5f56-47cf-a141-28e8786a0673\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.908885 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle\") pod \"b124b58f-5f56-47cf-a141-28e8786a0673\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.909653 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts\") pod \"b124b58f-5f56-47cf-a141-28e8786a0673\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.909698 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwl8f\" (UniqueName: \"kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f\") pod \"b124b58f-5f56-47cf-a141-28e8786a0673\" (UID: \"b124b58f-5f56-47cf-a141-28e8786a0673\") " Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.912688 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts" (OuterVolumeSpecName: "scripts") pod "b124b58f-5f56-47cf-a141-28e8786a0673" (UID: "b124b58f-5f56-47cf-a141-28e8786a0673"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.913469 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f" (OuterVolumeSpecName: "kube-api-access-bwl8f") pod "b124b58f-5f56-47cf-a141-28e8786a0673" (UID: "b124b58f-5f56-47cf-a141-28e8786a0673"). InnerVolumeSpecName "kube-api-access-bwl8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.937915 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data" (OuterVolumeSpecName: "config-data") pod "b124b58f-5f56-47cf-a141-28e8786a0673" (UID: "b124b58f-5f56-47cf-a141-28e8786a0673"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:17 crc kubenswrapper[4681]: I0404 02:28:17.938665 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b124b58f-5f56-47cf-a141-28e8786a0673" (UID: "b124b58f-5f56-47cf-a141-28e8786a0673"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.013042 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.013079 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.013089 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwl8f\" (UniqueName: \"kubernetes.io/projected/b124b58f-5f56-47cf-a141-28e8786a0673-kube-api-access-bwl8f\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.013101 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b124b58f-5f56-47cf-a141-28e8786a0673-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.322226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" event={"ID":"c7b18d01-a152-407d-94a5-993382ffd32f","Type":"ContainerDied","Data":"1a37f8c308fab2ba97c7919fa47e010ff2e51eb483df45bb49e0f50f8c0e2d92"} Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.322308 4681 scope.go:117] "RemoveContainer" containerID="5a037ab7c6105493761488a21dba9444577bf51cd20cd68c1034da93d6eff562" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.322467 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-59c75684f5-bch4x" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.341693 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zbjjd" event={"ID":"b124b58f-5f56-47cf-a141-28e8786a0673","Type":"ContainerDied","Data":"94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96"} Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.341920 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94eac1c45641ebadcea693c2537d5b52a73519b3108951203576347544943f96" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.341982 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zbjjd" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.401640 4681 scope.go:117] "RemoveContainer" containerID="b11b15c807303d02a07c1c177900ee213efa673f59eb41ff80f451e08936052a" Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.403733 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.414421 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-59c75684f5-bch4x"] Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.477448 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.477971 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-log" containerID="cri-o://d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d" gracePeriod=30 Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.478106 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df6bd6d2-059b-413b-9975-1db785d83944" 
containerName="nova-api-api" containerID="cri-o://4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a" gracePeriod=30 Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.504045 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:18 crc kubenswrapper[4681]: I0404 02:28:18.505039 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerName="nova-scheduler-scheduler" containerID="cri-o://d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" gracePeriod=30 Apr 04 02:28:19 crc kubenswrapper[4681]: I0404 02:28:19.212641 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" path="/var/lib/kubelet/pods/c7b18d01-a152-407d-94a5-993382ffd32f/volumes" Apr 04 02:28:19 crc kubenswrapper[4681]: I0404 02:28:19.354311 4681 generic.go:334] "Generic (PLEG): container finished" podID="df6bd6d2-059b-413b-9975-1db785d83944" containerID="d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d" exitCode=143 Apr 04 02:28:19 crc kubenswrapper[4681]: I0404 02:28:19.354355 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerDied","Data":"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d"} Apr 04 02:28:20 crc kubenswrapper[4681]: E0404 02:28:20.587474 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Apr 04 02:28:20 crc kubenswrapper[4681]: E0404 02:28:20.589492 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Apr 04 02:28:20 crc kubenswrapper[4681]: E0404 02:28:20.591756 4681 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Apr 04 02:28:20 crc kubenswrapper[4681]: E0404 02:28:20.591818 4681 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerName="nova-scheduler-scheduler" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.180423 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.276963 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle\") pod \"459acb42-4c8c-4498-83dc-9035ae5e38c0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.277063 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px7r5\" (UniqueName: \"kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5\") pod \"459acb42-4c8c-4498-83dc-9035ae5e38c0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.277109 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data\") pod \"459acb42-4c8c-4498-83dc-9035ae5e38c0\" (UID: \"459acb42-4c8c-4498-83dc-9035ae5e38c0\") " Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.287693 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5" (OuterVolumeSpecName: "kube-api-access-px7r5") pod "459acb42-4c8c-4498-83dc-9035ae5e38c0" (UID: "459acb42-4c8c-4498-83dc-9035ae5e38c0"). InnerVolumeSpecName "kube-api-access-px7r5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.287766 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.318973 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data" (OuterVolumeSpecName: "config-data") pod "459acb42-4c8c-4498-83dc-9035ae5e38c0" (UID: "459acb42-4c8c-4498-83dc-9035ae5e38c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.331421 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "459acb42-4c8c-4498-83dc-9035ae5e38c0" (UID: "459acb42-4c8c-4498-83dc-9035ae5e38c0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.381109 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.381144 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px7r5\" (UniqueName: \"kubernetes.io/projected/459acb42-4c8c-4498-83dc-9035ae5e38c0-kube-api-access-px7r5\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.381158 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/459acb42-4c8c-4498-83dc-9035ae5e38c0-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.384803 4681 generic.go:334] "Generic (PLEG): container finished" podID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" exitCode=0 Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.384860 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.384855 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"459acb42-4c8c-4498-83dc-9035ae5e38c0","Type":"ContainerDied","Data":"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e"} Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.385063 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"459acb42-4c8c-4498-83dc-9035ae5e38c0","Type":"ContainerDied","Data":"777a01d670e29ccdac66e4851921c269bcc0646c0621f9a9782c797c9ee6b6dd"} Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.385133 4681 scope.go:117] "RemoveContainer" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.461909 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.478361 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.482107 4681 scope.go:117] "RemoveContainer" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" Apr 04 02:28:21 crc kubenswrapper[4681]: E0404 02:28:21.484166 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e\": container with ID starting with d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e not found: ID does not exist" containerID="d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.484212 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e"} err="failed to get container status \"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e\": rpc error: code = NotFound desc = could not find container \"d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e\": container with ID starting with d6ebdb8d45b54f4061fd8e740d7ab9a9e746712f4d56542185310fabfed1502e not found: ID does not exist" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.489495 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:21 crc kubenswrapper[4681]: E0404 02:28:21.490033 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="dnsmasq-dns" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490057 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="dnsmasq-dns" Apr 04 02:28:21 crc kubenswrapper[4681]: E0404 02:28:21.490077 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerName="nova-scheduler-scheduler" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490085 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerName="nova-scheduler-scheduler" Apr 04 02:28:21 crc kubenswrapper[4681]: E0404 02:28:21.490101 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="init" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="init" Apr 04 02:28:21 crc kubenswrapper[4681]: E0404 02:28:21.490117 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b124b58f-5f56-47cf-a141-28e8786a0673" containerName="nova-manage" Apr 04 
02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490123 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b124b58f-5f56-47cf-a141-28e8786a0673" containerName="nova-manage" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490378 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b124b58f-5f56-47cf-a141-28e8786a0673" containerName="nova-manage" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490403 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" containerName="nova-scheduler-scheduler" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.490423 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7b18d01-a152-407d-94a5-993382ffd32f" containerName="dnsmasq-dns" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.491148 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.493483 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.507121 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.585276 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.585430 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.585548 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4w59\" (UniqueName: \"kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.687684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4w59\" (UniqueName: \"kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.687862 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.687952 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.694390 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 
02:28:21.705378 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.707378 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4w59\" (UniqueName: \"kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59\") pod \"nova-scheduler-0\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.823436 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:28:21 crc kubenswrapper[4681]: I0404 02:28:21.964710 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.095647 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle\") pod \"df6bd6d2-059b-413b-9975-1db785d83944\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.095709 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r648d\" (UniqueName: \"kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d\") pod \"df6bd6d2-059b-413b-9975-1db785d83944\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.095757 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data\") pod 
\"df6bd6d2-059b-413b-9975-1db785d83944\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.095785 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs\") pod \"df6bd6d2-059b-413b-9975-1db785d83944\" (UID: \"df6bd6d2-059b-413b-9975-1db785d83944\") " Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.096389 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs" (OuterVolumeSpecName: "logs") pod "df6bd6d2-059b-413b-9975-1db785d83944" (UID: "df6bd6d2-059b-413b-9975-1db785d83944"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.096992 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df6bd6d2-059b-413b-9975-1db785d83944-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.106010 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d" (OuterVolumeSpecName: "kube-api-access-r648d") pod "df6bd6d2-059b-413b-9975-1db785d83944" (UID: "df6bd6d2-059b-413b-9975-1db785d83944"). InnerVolumeSpecName "kube-api-access-r648d". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.133192 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data" (OuterVolumeSpecName: "config-data") pod "df6bd6d2-059b-413b-9975-1db785d83944" (UID: "df6bd6d2-059b-413b-9975-1db785d83944"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.147861 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df6bd6d2-059b-413b-9975-1db785d83944" (UID: "df6bd6d2-059b-413b-9975-1db785d83944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.203145 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.203189 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r648d\" (UniqueName: \"kubernetes.io/projected/df6bd6d2-059b-413b-9975-1db785d83944-kube-api-access-r648d\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.203202 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df6bd6d2-059b-413b-9975-1db785d83944-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.317881 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.397592 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9616ac2a-6a60-414d-a440-59105ea678ee","Type":"ContainerStarted","Data":"fdfac131e8ddae49808f6bd7c1a09e4cdc0fe21c455b6bad8c7e724bce571e9e"} Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.401437 4681 generic.go:334] "Generic (PLEG): container finished" podID="df6bd6d2-059b-413b-9975-1db785d83944" 
containerID="4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a" exitCode=0 Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.401490 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerDied","Data":"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a"} Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.401544 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df6bd6d2-059b-413b-9975-1db785d83944","Type":"ContainerDied","Data":"0aeeb92eaecb23cfb3ab18f9c73a5c8a69bd67c42734c30ca4dfcafd92cff1a0"} Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.401565 4681 scope.go:117] "RemoveContainer" containerID="4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.401566 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.425162 4681 scope.go:117] "RemoveContainer" containerID="d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.450539 4681 scope.go:117] "RemoveContainer" containerID="4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a" Apr 04 02:28:22 crc kubenswrapper[4681]: E0404 02:28:22.451407 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a\": container with ID starting with 4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a not found: ID does not exist" containerID="4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.451441 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a"} err="failed to get container status \"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a\": rpc error: code = NotFound desc = could not find container \"4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a\": container with ID starting with 4f285f734237888eee51499ac0a34d512828f0a1e55f4c0a5fea04815f23369a not found: ID does not exist" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.451461 4681 scope.go:117] "RemoveContainer" containerID="d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d" Apr 04 02:28:22 crc kubenswrapper[4681]: E0404 02:28:22.451751 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d\": container with ID starting with d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d not found: ID does not exist" containerID="d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.451775 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d"} err="failed to get container status \"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d\": rpc error: code = NotFound desc = could not find container \"d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d\": container with ID starting with d76f93c8bb757c42f4830ad919b8f952336b3a48fc935c240556977a72321c1d not found: ID does not exist" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.465714 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.484375 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Apr 04 
02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.496582 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:22 crc kubenswrapper[4681]: E0404 02:28:22.497013 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-api" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.497043 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-api" Apr 04 02:28:22 crc kubenswrapper[4681]: E0404 02:28:22.497063 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-log" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.497069 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-log" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.497274 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-log" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.497305 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6bd6d2-059b-413b-9975-1db785d83944" containerName="nova-api-api" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.498382 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.505547 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.506646 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.611315 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.611840 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.612001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.612142 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n65vs\" (UniqueName: \"kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.714486 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n65vs\" (UniqueName: \"kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.714543 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.714617 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.714684 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.715121 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.720226 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.729475 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.741826 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n65vs\" (UniqueName: \"kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs\") pod \"nova-api-0\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " pod="openstack/nova-api-0" Apr 04 02:28:22 crc kubenswrapper[4681]: I0404 02:28:22.814659 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.214384 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="459acb42-4c8c-4498-83dc-9035ae5e38c0" path="/var/lib/kubelet/pods/459acb42-4c8c-4498-83dc-9035ae5e38c0/volumes" Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.215829 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6bd6d2-059b-413b-9975-1db785d83944" path="/var/lib/kubelet/pods/df6bd6d2-059b-413b-9975-1db785d83944/volumes" Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.322863 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.429481 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerStarted","Data":"c81020721508cd4cee5827cd3862e5f5e2054b0d8337f9f9616b2ce44cf0d5c4"} Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.433633 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"9616ac2a-6a60-414d-a440-59105ea678ee","Type":"ContainerStarted","Data":"e57fbec9638241c28657308939ec81020cacd82283192da6742fd4fe223deeaa"} Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.463671 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.463622754 podStartE2EDuration="2.463622754s" podCreationTimestamp="2026-04-04 02:28:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:23.448640702 +0000 UTC m=+1983.114415822" watchObservedRunningTime="2026-04-04 02:28:23.463622754 +0000 UTC m=+1983.129397874" Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.958751 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:28:23 crc kubenswrapper[4681]: I0404 02:28:23.959116 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:28:24 crc kubenswrapper[4681]: I0404 02:28:24.452342 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerStarted","Data":"ca0b277dc7b5ac3f390bf53923fef73cdddc38eb8fee3c48603d42a2007294f4"} Apr 04 02:28:24 crc kubenswrapper[4681]: I0404 02:28:24.452657 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerStarted","Data":"388d9c8a0ee93836a8f063a1b05a3017f12521424b0669f7ef0d98c941cd82a1"} Apr 04 02:28:24 crc kubenswrapper[4681]: I0404 02:28:24.481067 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.481043217 podStartE2EDuration="2.481043217s" podCreationTimestamp="2026-04-04 02:28:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:24.469319394 +0000 UTC m=+1984.135094524" watchObservedRunningTime="2026-04-04 02:28:24.481043217 +0000 UTC m=+1984.146818337" Apr 04 02:28:25 crc kubenswrapper[4681]: I0404 02:28:25.705071 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:25 crc kubenswrapper[4681]: I0404 02:28:25.705405 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e1febd11-574c-4fc6-967c-d74bef4e351a" containerName="kube-state-metrics" containerID="cri-o://4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379" gracePeriod=30 Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.265161 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.335760 4681 scope.go:117] "RemoveContainer" containerID="bf6e5429e571e07de9ffe75d154b44f596a33fd05322ef8437505dd567e4b225" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.369851 4681 scope.go:117] "RemoveContainer" containerID="4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.411400 4681 scope.go:117] "RemoveContainer" containerID="3f7e807ead5438442082ebc11cfbfe063147e5777f588d021a586b1b4cdae775" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.417358 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8jfb\" (UniqueName: \"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb\") pod \"e1febd11-574c-4fc6-967c-d74bef4e351a\" (UID: \"e1febd11-574c-4fc6-967c-d74bef4e351a\") " Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.423430 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb" (OuterVolumeSpecName: "kube-api-access-t8jfb") pod "e1febd11-574c-4fc6-967c-d74bef4e351a" (UID: "e1febd11-574c-4fc6-967c-d74bef4e351a"). InnerVolumeSpecName "kube-api-access-t8jfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.438692 4681 scope.go:117] "RemoveContainer" containerID="c7e1b6a03e18aea1164db3877c25464aaa6d1293ab041182ec06680694ceba31" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.473826 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1febd11-574c-4fc6-967c-d74bef4e351a","Type":"ContainerDied","Data":"4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379"} Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.473859 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.473867 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e1febd11-574c-4fc6-967c-d74bef4e351a","Type":"ContainerDied","Data":"5270ac1c78bc95c1c239197d463aadf43af48b2e1404bc9d97a5edd151413efa"} Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.478280 4681 generic.go:334] "Generic (PLEG): container finished" podID="f2883f88-a3fd-46b2-8452-32974b0c6b4f" containerID="47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00" exitCode=0 Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.478305 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" event={"ID":"f2883f88-a3fd-46b2-8452-32974b0c6b4f","Type":"ContainerDied","Data":"47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00"} Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.519768 4681 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-t8jfb\" (UniqueName: \"kubernetes.io/projected/e1febd11-574c-4fc6-967c-d74bef4e351a-kube-api-access-t8jfb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.534346 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.546528 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.560314 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:26 crc kubenswrapper[4681]: E0404 02:28:26.560869 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1febd11-574c-4fc6-967c-d74bef4e351a" containerName="kube-state-metrics" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.560888 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1febd11-574c-4fc6-967c-d74bef4e351a" containerName="kube-state-metrics" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.561175 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1febd11-574c-4fc6-967c-d74bef4e351a" containerName="kube-state-metrics" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.561953 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.564966 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.565167 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.569802 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.723071 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.723132 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.723213 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.723286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66lzw\" (UniqueName: 
\"kubernetes.io/projected/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-api-access-66lzw\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.825560 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.825664 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66lzw\" (UniqueName: \"kubernetes.io/projected/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-api-access-66lzw\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.825799 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.825823 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.828359 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.837384 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.844885 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.848097 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e40c22d-4a3b-4321-ac7d-f623845423fc-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.865124 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66lzw\" (UniqueName: \"kubernetes.io/projected/6e40c22d-4a3b-4321-ac7d-f623845423fc-kube-api-access-66lzw\") pod \"kube-state-metrics-0\" (UID: \"6e40c22d-4a3b-4321-ac7d-f623845423fc\") " pod="openstack/kube-state-metrics-0" Apr 04 02:28:26 crc kubenswrapper[4681]: I0404 02:28:26.904585 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.215760 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1febd11-574c-4fc6-967c-d74bef4e351a" path="/var/lib/kubelet/pods/e1febd11-574c-4fc6-967c-d74bef4e351a/volumes" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.423111 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.497487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e40c22d-4a3b-4321-ac7d-f623845423fc","Type":"ContainerStarted","Data":"b4df25d6ceadbd5628e3bf45511d8b8ed8c6d8d97239b9c8eff87cf66093c488"} Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.776758 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.843546 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.843979 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-central-agent" containerID="cri-o://c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a" gracePeriod=30 Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.844136 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="proxy-httpd" containerID="cri-o://42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826" gracePeriod=30 Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.844184 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="sg-core" containerID="cri-o://9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08" gracePeriod=30 Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.844228 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-notification-agent" containerID="cri-o://459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468" gracePeriod=30 Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.857926 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data\") pod \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.858002 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts\") pod \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.858069 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ng7d\" (UniqueName: \"kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d\") pod \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.858185 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle\") pod \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\" (UID: \"f2883f88-a3fd-46b2-8452-32974b0c6b4f\") " Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 
02:28:27.865151 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts" (OuterVolumeSpecName: "scripts") pod "f2883f88-a3fd-46b2-8452-32974b0c6b4f" (UID: "f2883f88-a3fd-46b2-8452-32974b0c6b4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.867462 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d" (OuterVolumeSpecName: "kube-api-access-8ng7d") pod "f2883f88-a3fd-46b2-8452-32974b0c6b4f" (UID: "f2883f88-a3fd-46b2-8452-32974b0c6b4f"). InnerVolumeSpecName "kube-api-access-8ng7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.910437 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data" (OuterVolumeSpecName: "config-data") pod "f2883f88-a3fd-46b2-8452-32974b0c6b4f" (UID: "f2883f88-a3fd-46b2-8452-32974b0c6b4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.920368 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2883f88-a3fd-46b2-8452-32974b0c6b4f" (UID: "f2883f88-a3fd-46b2-8452-32974b0c6b4f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.960216 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.960272 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.960282 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ng7d\" (UniqueName: \"kubernetes.io/projected/f2883f88-a3fd-46b2-8452-32974b0c6b4f-kube-api-access-8ng7d\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:27 crc kubenswrapper[4681]: I0404 02:28:27.960291 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2883f88-a3fd-46b2-8452-32974b0c6b4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.511518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6e40c22d-4a3b-4321-ac7d-f623845423fc","Type":"ContainerStarted","Data":"1d184e87c275fbb6583ebadc6bd8ad066c254a7160ff59773859b63b401433cb"} Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.512132 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515047 4681 generic.go:334] "Generic (PLEG): container finished" podID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerID="42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826" exitCode=0 Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515082 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerID="9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08" exitCode=2 Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515089 4681 generic.go:334] "Generic (PLEG): container finished" podID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerID="c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a" exitCode=0 Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515119 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerDied","Data":"42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826"} Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515162 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerDied","Data":"9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08"} Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.515178 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerDied","Data":"c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a"} Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.516941 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" event={"ID":"f2883f88-a3fd-46b2-8452-32974b0c6b4f","Type":"ContainerDied","Data":"f259a6db10d1325f93339d80c39fe6057aab2319d05a927b85e124fb71c14649"} Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.516968 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f259a6db10d1325f93339d80c39fe6057aab2319d05a927b85e124fb71c14649" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.517050 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nnf4c" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.561446 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.19353803 podStartE2EDuration="2.561426382s" podCreationTimestamp="2026-04-04 02:28:26 +0000 UTC" firstStartedPulling="2026-04-04 02:28:27.419206756 +0000 UTC m=+1987.084981876" lastFinishedPulling="2026-04-04 02:28:27.787095108 +0000 UTC m=+1987.452870228" observedRunningTime="2026-04-04 02:28:28.54536916 +0000 UTC m=+1988.211144280" watchObservedRunningTime="2026-04-04 02:28:28.561426382 +0000 UTC m=+1988.227201513" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.577110 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Apr 04 02:28:28 crc kubenswrapper[4681]: E0404 02:28:28.577533 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2883f88-a3fd-46b2-8452-32974b0c6b4f" containerName="nova-cell1-conductor-db-sync" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.577551 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2883f88-a3fd-46b2-8452-32974b0c6b4f" containerName="nova-cell1-conductor-db-sync" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.577772 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2883f88-a3fd-46b2-8452-32974b0c6b4f" containerName="nova-cell1-conductor-db-sync" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.578449 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.584294 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.594244 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.677824 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.677977 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9v99\" (UniqueName: \"kubernetes.io/projected/0e7ba727-658d-49f6-9e24-68da37adca06-kube-api-access-n9v99\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.678078 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.779371 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9v99\" (UniqueName: \"kubernetes.io/projected/0e7ba727-658d-49f6-9e24-68da37adca06-kube-api-access-n9v99\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 
02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.779493 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.779546 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.784664 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.784813 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e7ba727-658d-49f6-9e24-68da37adca06-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.795523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9v99\" (UniqueName: \"kubernetes.io/projected/0e7ba727-658d-49f6-9e24-68da37adca06-kube-api-access-n9v99\") pod \"nova-cell1-conductor-0\" (UID: \"0e7ba727-658d-49f6-9e24-68da37adca06\") " pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: I0404 02:28:28.899732 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:28 crc kubenswrapper[4681]: E0404 02:28:28.976490 4681 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/20e78395a05ec4eab33c0aa8985da7e88d19b6438585ae4cf81348073fa51bbd/diff" to get inode usage: stat /var/lib/containers/storage/overlay/20e78395a05ec4eab33c0aa8985da7e88d19b6438585ae4cf81348073fa51bbd/diff: no such file or directory, extraDiskErr: Apr 04 02:28:29 crc kubenswrapper[4681]: I0404 02:28:29.201697 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:28:29 crc kubenswrapper[4681]: E0404 02:28:29.202123 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:28:29 crc kubenswrapper[4681]: I0404 02:28:29.357443 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Apr 04 02:28:29 crc kubenswrapper[4681]: I0404 02:28:29.527178 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0e7ba727-658d-49f6-9e24-68da37adca06","Type":"ContainerStarted","Data":"bd9a4d53cffb9983789a5abe73312c72e458e31873afca00766501c9cb4309ef"} Apr 04 02:28:30 crc kubenswrapper[4681]: I0404 02:28:30.541950 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"0e7ba727-658d-49f6-9e24-68da37adca06","Type":"ContainerStarted","Data":"fbde8de9a3334db4d1e3dc866fc07bfdc4e86be30d992e9c7f7f25906f1944f9"} Apr 04 02:28:30 crc kubenswrapper[4681]: I0404 
02:28:30.542326 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:30 crc kubenswrapper[4681]: I0404 02:28:30.579437 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.579414604 podStartE2EDuration="2.579414604s" podCreationTimestamp="2026-04-04 02:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:30.562504319 +0000 UTC m=+1990.228279449" watchObservedRunningTime="2026-04-04 02:28:30.579414604 +0000 UTC m=+1990.245189724" Apr 04 02:28:31 crc kubenswrapper[4681]: I0404 02:28:31.823966 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Apr 04 02:28:31 crc kubenswrapper[4681]: I0404 02:28:31.852308 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Apr 04 02:28:32 crc kubenswrapper[4681]: I0404 02:28:32.714962 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Apr 04 02:28:32 crc kubenswrapper[4681]: I0404 02:28:32.815606 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:28:32 crc kubenswrapper[4681]: I0404 02:28:32.815711 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:28:33 crc kubenswrapper[4681]: I0404 02:28:33.856525 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:33 crc kubenswrapper[4681]: I0404 02:28:33.856746 4681 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.222:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.703945 4681 generic.go:334] "Generic (PLEG): container finished" podID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerID="459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468" exitCode=0 Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.704221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerDied","Data":"459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468"} Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.704247 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3281467f-d73d-47b3-8cfe-44b4beb7b14a","Type":"ContainerDied","Data":"932dc3afa6e111a8af5632545106b31c90123b43989e4da0201e15e457c85cd3"} Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.704257 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="932dc3afa6e111a8af5632545106b31c90123b43989e4da0201e15e457c85cd3" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.772096 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909165 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909488 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909610 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jddnc\" (UniqueName: \"kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909708 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909764 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909818 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.909889 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd\") pod \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\" (UID: \"3281467f-d73d-47b3-8cfe-44b4beb7b14a\") " Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.910250 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.910432 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.910472 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.914435 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts" (OuterVolumeSpecName: "scripts") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.914847 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc" (OuterVolumeSpecName: "kube-api-access-jddnc") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "kube-api-access-jddnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:34 crc kubenswrapper[4681]: I0404 02:28:34.940501 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.013036 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jddnc\" (UniqueName: \"kubernetes.io/projected/3281467f-d73d-47b3-8cfe-44b4beb7b14a-kube-api-access-jddnc\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.013069 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.013082 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3281467f-d73d-47b3-8cfe-44b4beb7b14a-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.013092 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-scripts\") on node 
\"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.016418 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.016802 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data" (OuterVolumeSpecName: "config-data") pod "3281467f-d73d-47b3-8cfe-44b4beb7b14a" (UID: "3281467f-d73d-47b3-8cfe-44b4beb7b14a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.115585 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.115633 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3281467f-d73d-47b3-8cfe-44b4beb7b14a-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.711763 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.741108 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.753312 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.763923 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:35 crc kubenswrapper[4681]: E0404 02:28:35.764532 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="proxy-httpd" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.764557 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="proxy-httpd" Apr 04 02:28:35 crc kubenswrapper[4681]: E0404 02:28:35.764568 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-central-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.764577 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-central-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: E0404 02:28:35.764586 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="sg-core" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.764593 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="sg-core" Apr 04 02:28:35 crc kubenswrapper[4681]: E0404 02:28:35.764626 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-notification-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.764633 4681 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-notification-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.765058 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="sg-core" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.765085 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="proxy-httpd" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.765104 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-notification-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.765118 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" containerName="ceilometer-central-agent" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.767378 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.773758 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.774510 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.774629 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.783295 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.933359 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.933510 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.933743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.933890 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.933947 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.934001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddlpb\" (UniqueName: \"kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.934123 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:35 crc kubenswrapper[4681]: I0404 02:28:35.934166 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.036222 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd\") pod \"ceilometer-0\" (UID: 
\"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.036893 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037032 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037161 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddlpb\" (UniqueName: \"kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037324 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037439 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037169 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037564 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.037692 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.038036 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.041061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.041344 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 
02:28:36.045826 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.045891 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.049205 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.056025 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddlpb\" (UniqueName: \"kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb\") pod \"ceilometer-0\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.092820 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.565116 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.722895 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerStarted","Data":"5c219d17d8dc89e9342e88e20b6b7c6bcf5fc913abaf820b52687daee0188da9"} Apr 04 02:28:36 crc kubenswrapper[4681]: I0404 02:28:36.914240 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Apr 04 02:28:37 crc kubenswrapper[4681]: I0404 02:28:37.217756 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3281467f-d73d-47b3-8cfe-44b4beb7b14a" path="/var/lib/kubelet/pods/3281467f-d73d-47b3-8cfe-44b4beb7b14a/volumes" Apr 04 02:28:37 crc kubenswrapper[4681]: I0404 02:28:37.734252 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerStarted","Data":"0b754dfab6e10f5bff69352930844341e9ef06b8c84633cb14947a2c7404145c"} Apr 04 02:28:38 crc kubenswrapper[4681]: I0404 02:28:38.743234 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerStarted","Data":"ed8c84566f7b23c510564455fcf27b42c2bc928a722478f2535fa7c3a678f070"} Apr 04 02:28:38 crc kubenswrapper[4681]: I0404 02:28:38.935715 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Apr 04 02:28:39 crc kubenswrapper[4681]: I0404 02:28:39.754650 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerStarted","Data":"8e24430e03521f4331387e89f4d6208727be6c4c35c8eed9b08d9aacf0fae40f"} 
Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.815805 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.815864 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.864140 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.866899 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.881860 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.949512 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.949839 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:40 crc kubenswrapper[4681]: I0404 02:28:40.950084 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6k6c\" (UniqueName: \"kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c\") pod \"community-operators-jtq9h\" 
(UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.052533 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6k6c\" (UniqueName: \"kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.052669 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.052712 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.053230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.053689 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content\") pod \"community-operators-jtq9h\" (UID: 
\"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.071637 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6k6c\" (UniqueName: \"kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c\") pod \"community-operators-jtq9h\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.188495 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.212870 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:28:41 crc kubenswrapper[4681]: E0404 02:28:41.213104 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.778416 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerStarted","Data":"92478cdaf637f692b722213ee9f123d26d0eb703bd75deee9e652843eaeba8f3"} Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.778774 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:28:41 crc kubenswrapper[4681]: I0404 02:28:41.805242 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] 
Apr 04 02:28:42 crc kubenswrapper[4681]: E0404 02:28:42.263866 4681 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/00cb3ad7ae9ac3aa3ba8520970a1a26650b7fe8679d9db0328a984539f4a583f/diff" to get inode usage: stat /var/lib/containers/storage/overlay/00cb3ad7ae9ac3aa3ba8520970a1a26650b7fe8679d9db0328a984539f4a583f/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_kube-state-metrics-0_e1febd11-574c-4fc6-967c-d74bef4e351a/kube-state-metrics/0.log" to get inode usage: stat /var/log/pods/openstack_kube-state-metrics-0_e1febd11-574c-4fc6-967c-d74bef4e351a/kube-state-metrics/0.log: no such file or directory Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.622659 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.641511 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.065504442 podStartE2EDuration="7.641483373s" podCreationTimestamp="2026-04-04 02:28:35 +0000 UTC" firstStartedPulling="2026-04-04 02:28:36.575487597 +0000 UTC m=+1996.241262717" lastFinishedPulling="2026-04-04 02:28:41.151466528 +0000 UTC m=+2000.817241648" observedRunningTime="2026-04-04 02:28:41.81160148 +0000 UTC m=+2001.477376600" watchObservedRunningTime="2026-04-04 02:28:42.641483373 +0000 UTC m=+2002.307258513" Apr 04 02:28:42 crc kubenswrapper[4681]: E0404 02:28:42.659116 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2883f88_a3fd_46b2_8452_32974b0c6b4f.slice/crio-conmon-47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49a198d_f1ad_4eff_9c8d_a0360b9fedb4.slice/crio-a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda49a198d_f1ad_4eff_9c8d_a0360b9fedb4.slice/crio-conmon-a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-conmon-9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2883f88_a3fd_46b2_8452_32974b0c6b4f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-conmon-459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2883f88_a3fd_46b2_8452_32974b0c6b4f.slice/crio-f259a6db10d1325f93339d80c39fe6057aab2319d05a927b85e124fb71c14649\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-conmon-42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-932dc3afa6e111a8af5632545106b31c90123b43989e4da0201e15e457c85cd3\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1febd11_574c_4fc6_967c_d74bef4e351a.slice/crio-5270ac1c78bc95c1c239197d463aadf43af48b2e1404bc9d97a5edd151413efa\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1febd11_574c_4fc6_967c_d74bef4e351a.slice/crio-conmon-4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeda4f6e7_db6f_43af_9519_87fb93938d78.slice/crio-21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-conmon-c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1febd11_574c_4fc6_967c_d74bef4e351a.slice\": RecentStats: unable to 
find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3281467f_d73d_47b3_8cfe_44b4beb7b14a.slice/crio-459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1febd11_574c_4fc6_967c_d74bef4e351a.slice/crio-4e8db8d5c0ddf84e62042ca36be9c30db3a79e75a7e6c04276ba26e3ed4cc379.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2883f88_a3fd_46b2_8452_32974b0c6b4f.slice/crio-47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.670083 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.788062 4681 generic.go:334] "Generic (PLEG): container finished" podID="eda4f6e7-db6f-43af-9519-87fb93938d78" containerID="21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040" exitCode=137 Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.788104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eda4f6e7-db6f-43af-9519-87fb93938d78","Type":"ContainerDied","Data":"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.788152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"eda4f6e7-db6f-43af-9519-87fb93938d78","Type":"ContainerDied","Data":"80ffc0175c11ceeb403b29ca3a37089f3f3cd8df3757a9f0d5a08600034d4da7"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.788181 4681 scope.go:117] "RemoveContainer" containerID="21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040" Apr 04 02:28:42 crc 
kubenswrapper[4681]: I0404 02:28:42.788610 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.790112 4681 generic.go:334] "Generic (PLEG): container finished" podID="080f79db-04a5-4733-8669-83b168e7e448" containerID="9523bb2de993dfc16de3e5a56ee3568542c792f5bfb677a5d39327773aabf945" exitCode=0 Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.790159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerDied","Data":"9523bb2de993dfc16de3e5a56ee3568542c792f5bfb677a5d39327773aabf945"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.790176 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerStarted","Data":"0e6ffe2b13c39180945c6c590076d3ae00e8159d214b5401165d32f3840b73d1"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.796544 4681 generic.go:334] "Generic (PLEG): container finished" podID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerID="a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83" exitCode=137 Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.796893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data\") pod \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.796980 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj5j5\" (UniqueName: \"kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5\") pod \"eda4f6e7-db6f-43af-9519-87fb93938d78\" (UID: 
\"eda4f6e7-db6f-43af-9519-87fb93938d78\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797023 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data\") pod \"eda4f6e7-db6f-43af-9519-87fb93938d78\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797050 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle\") pod \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797096 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle\") pod \"eda4f6e7-db6f-43af-9519-87fb93938d78\" (UID: \"eda4f6e7-db6f-43af-9519-87fb93938d78\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797125 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjpss\" (UniqueName: \"kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss\") pod \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs\") pod \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\" (UID: \"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4\") " Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797599 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797790 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerDied","Data":"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.797834 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a49a198d-f1ad-4eff-9c8d-a0360b9fedb4","Type":"ContainerDied","Data":"f72e4ba71a6925c6323ed9aa811647a3264fd739b31e74e24dd939fa1985eab7"} Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.798479 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs" (OuterVolumeSpecName: "logs") pod "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" (UID: "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.802980 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss" (OuterVolumeSpecName: "kube-api-access-cjpss") pod "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" (UID: "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4"). InnerVolumeSpecName "kube-api-access-cjpss". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.803128 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5" (OuterVolumeSpecName: "kube-api-access-qj5j5") pod "eda4f6e7-db6f-43af-9519-87fb93938d78" (UID: "eda4f6e7-db6f-43af-9519-87fb93938d78"). InnerVolumeSpecName "kube-api-access-qj5j5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.816433 4681 scope.go:117] "RemoveContainer" containerID="21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040" Apr 04 02:28:42 crc kubenswrapper[4681]: E0404 02:28:42.817096 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040\": container with ID starting with 21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040 not found: ID does not exist" containerID="21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.817207 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040"} err="failed to get container status \"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040\": rpc error: code = NotFound desc = could not find container \"21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040\": container with ID starting with 21512de4eae6000338472fdb5e2d0c1e659a12e304f47f5775eaec8544cc1040 not found: ID does not exist" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.817326 4681 scope.go:117] "RemoveContainer" containerID="a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.823849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.825493 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.832802 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Apr 04 02:28:42 crc 
kubenswrapper[4681]: I0404 02:28:42.835138 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data" (OuterVolumeSpecName: "config-data") pod "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" (UID: "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.848826 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eda4f6e7-db6f-43af-9519-87fb93938d78" (UID: "eda4f6e7-db6f-43af-9519-87fb93938d78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.851313 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data" (OuterVolumeSpecName: "config-data") pod "eda4f6e7-db6f-43af-9519-87fb93938d78" (UID: "eda4f6e7-db6f-43af-9519-87fb93938d78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.851468 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" (UID: "a49a198d-f1ad-4eff-9c8d-a0360b9fedb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.851706 4681 scope.go:117] "RemoveContainer" containerID="b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.878885 4681 scope.go:117] "RemoveContainer" containerID="a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83" Apr 04 02:28:42 crc kubenswrapper[4681]: E0404 02:28:42.879462 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83\": container with ID starting with a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83 not found: ID does not exist" containerID="a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.879495 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83"} err="failed to get container status \"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83\": rpc error: code = NotFound desc = could not find container \"a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83\": container with ID starting with a579325584373d5d7a5fb02f7e0c57088ff075be8189098b371a6001de24be83 not found: ID does not exist" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.879516 4681 scope.go:117] "RemoveContainer" containerID="b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6" Apr 04 02:28:42 crc kubenswrapper[4681]: E0404 02:28:42.880161 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6\": container with ID starting with 
b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6 not found: ID does not exist" containerID="b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.880214 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6"} err="failed to get container status \"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6\": rpc error: code = NotFound desc = could not find container \"b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6\": container with ID starting with b2c51758671230e20382a4fa5410d4e92d757fa6881f7e4769aacf2731a0bef6 not found: ID does not exist" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.901660 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.901876 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.901984 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj5j5\" (UniqueName: \"kubernetes.io/projected/eda4f6e7-db6f-43af-9519-87fb93938d78-kube-api-access-qj5j5\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.902033 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.902044 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.902054 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eda4f6e7-db6f-43af-9519-87fb93938d78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:42 crc kubenswrapper[4681]: I0404 02:28:42.902063 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjpss\" (UniqueName: \"kubernetes.io/projected/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4-kube-api-access-cjpss\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.171760 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.187470 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.253927 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda4f6e7-db6f-43af-9519-87fb93938d78" path="/var/lib/kubelet/pods/eda4f6e7-db6f-43af-9519-87fb93938d78/volumes" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257153 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257192 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257213 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: E0404 02:28:43.257547 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-log" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257563 4681 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-log" Apr 04 02:28:43 crc kubenswrapper[4681]: E0404 02:28:43.257587 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-metadata" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257594 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-metadata" Apr 04 02:28:43 crc kubenswrapper[4681]: E0404 02:28:43.257620 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda4f6e7-db6f-43af-9519-87fb93938d78" containerName="nova-cell1-novncproxy-novncproxy" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257627 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda4f6e7-db6f-43af-9519-87fb93938d78" containerName="nova-cell1-novncproxy-novncproxy" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257805 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-metadata" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257815 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda4f6e7-db6f-43af-9519-87fb93938d78" containerName="nova-cell1-novncproxy-novncproxy" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.257832 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" containerName="nova-metadata-log" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.260514 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.260604 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.263399 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.263983 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.264220 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.276336 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.278804 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.281105 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.281501 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.294786 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.412164 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqkmk\" (UniqueName: \"kubernetes.io/projected/f5986086-65b9-41b2-bb40-8ad2c6b42d11-kube-api-access-bqkmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.412551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.412733 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.412820 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.412957 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.413097 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.413239 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.413393 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnff\" (UniqueName: \"kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.413749 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.414401 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.516430 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.516827 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-vencrypt-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.516916 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.516989 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517075 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517120 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fnff\" (UniqueName: \"kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517366 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " 
pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517572 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqkmk\" (UniqueName: \"kubernetes.io/projected/f5986086-65b9-41b2-bb40-8ad2c6b42d11-kube-api-access-bqkmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.517783 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.522230 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.522399 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.522807 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.516693 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.523559 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.526422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.526861 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.542801 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5986086-65b9-41b2-bb40-8ad2c6b42d11-nova-novncproxy-tls-certs\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.547898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqkmk\" (UniqueName: \"kubernetes.io/projected/f5986086-65b9-41b2-bb40-8ad2c6b42d11-kube-api-access-bqkmk\") pod \"nova-cell1-novncproxy-0\" (UID: \"f5986086-65b9-41b2-bb40-8ad2c6b42d11\") " pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.553000 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fnff\" (UniqueName: \"kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff\") pod \"nova-metadata-0\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.587950 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.605796 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.807789 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerStarted","Data":"19d79900902a8fd0988f5781d9d9cb8727bb6a8740e5502d13e2f0c0c662aefb"} Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.829214 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Apr 04 02:28:43 crc kubenswrapper[4681]: I0404 02:28:43.999021 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.006071 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.019338 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:28:44 crc kubenswrapper[4681]: W0404 02:28:44.090461 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5986086_65b9_41b2_bb40_8ad2c6b42d11.slice/crio-87888b34588d7a025e6d849c6a3b6e497fe5fba240beaffe089d5bebb26ca151 WatchSource:0}: Error finding container 87888b34588d7a025e6d849c6a3b6e497fe5fba240beaffe089d5bebb26ca151: Status 404 returned error can't find the container with id 87888b34588d7a025e6d849c6a3b6e497fe5fba240beaffe089d5bebb26ca151 Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.095850 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.132884 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.133240 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5gr\" (UniqueName: \"kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.133366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.133441 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.133475 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.133586 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.172642 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238660 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238749 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238790 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p5gr\" (UniqueName: \"kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238868 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238926 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.238951 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.239979 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.240644 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.244059 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.245978 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.246394 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.260070 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p5gr\" (UniqueName: 
\"kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr\") pod \"dnsmasq-dns-54fd76d97c-j5cr2\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.328287 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.831293 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5986086-65b9-41b2-bb40-8ad2c6b42d11","Type":"ContainerStarted","Data":"1e3d38bd30c50e750fe7d63e13cd842b6c5105ccdb9a588ef3824dc42b2716ef"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.831707 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f5986086-65b9-41b2-bb40-8ad2c6b42d11","Type":"ContainerStarted","Data":"87888b34588d7a025e6d849c6a3b6e497fe5fba240beaffe089d5bebb26ca151"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.833907 4681 generic.go:334] "Generic (PLEG): container finished" podID="080f79db-04a5-4733-8669-83b168e7e448" containerID="19d79900902a8fd0988f5781d9d9cb8727bb6a8740e5502d13e2f0c0c662aefb" exitCode=0 Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.834131 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerDied","Data":"19d79900902a8fd0988f5781d9d9cb8727bb6a8740e5502d13e2f0c0c662aefb"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.839155 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerStarted","Data":"189ea82beb9e827b8827da85d019ab12eafdd5e0b9b0b15546a114106b0460f1"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.839184 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerStarted","Data":"fbe749cd7dabad970bbb5c99b2a9ab570d07d522fece5a7653f3bbdcad6d2034"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.839193 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerStarted","Data":"1b1b2fc40b4971b9ddd0fa3edbc86978d0f916463f03b118b65a653f74bb1f21"} Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.856959 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.8569439079999999 podStartE2EDuration="1.856943908s" podCreationTimestamp="2026-04-04 02:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:44.849331249 +0000 UTC m=+2004.515106369" watchObservedRunningTime="2026-04-04 02:28:44.856943908 +0000 UTC m=+2004.522719028" Apr 04 02:28:44 crc kubenswrapper[4681]: W0404 02:28:44.864596 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3fb1572_f7e6_4be5_8839_647fa7e78e67.slice/crio-8ddd64e4d2e569901001b9f84b1c861fe929c48b10b230258dd8c474f71e9523 WatchSource:0}: Error finding container 8ddd64e4d2e569901001b9f84b1c861fe929c48b10b230258dd8c474f71e9523: Status 404 returned error can't find the container with id 8ddd64e4d2e569901001b9f84b1c861fe929c48b10b230258dd8c474f71e9523 Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.889621 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:28:44 crc kubenswrapper[4681]: I0404 02:28:44.907528 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.907510799 
podStartE2EDuration="1.907510799s" podCreationTimestamp="2026-04-04 02:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:44.900583128 +0000 UTC m=+2004.566358248" watchObservedRunningTime="2026-04-04 02:28:44.907510799 +0000 UTC m=+2004.573285919" Apr 04 02:28:45 crc kubenswrapper[4681]: I0404 02:28:45.214658 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49a198d-f1ad-4eff-9c8d-a0360b9fedb4" path="/var/lib/kubelet/pods/a49a198d-f1ad-4eff-9c8d-a0360b9fedb4/volumes" Apr 04 02:28:45 crc kubenswrapper[4681]: I0404 02:28:45.851383 4681 generic.go:334] "Generic (PLEG): container finished" podID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerID="b92741ef78aae6d49d83db8007273ea81eb22edf11dffc051be9d1e489900c0b" exitCode=0 Apr 04 02:28:45 crc kubenswrapper[4681]: I0404 02:28:45.851487 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" event={"ID":"d3fb1572-f7e6-4be5-8839-647fa7e78e67","Type":"ContainerDied","Data":"b92741ef78aae6d49d83db8007273ea81eb22edf11dffc051be9d1e489900c0b"} Apr 04 02:28:45 crc kubenswrapper[4681]: I0404 02:28:45.852561 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" event={"ID":"d3fb1572-f7e6-4be5-8839-647fa7e78e67","Type":"ContainerStarted","Data":"8ddd64e4d2e569901001b9f84b1c861fe929c48b10b230258dd8c474f71e9523"} Apr 04 02:28:46 crc kubenswrapper[4681]: I0404 02:28:46.865036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" event={"ID":"d3fb1572-f7e6-4be5-8839-647fa7e78e67","Type":"ContainerStarted","Data":"cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53"} Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.245238 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:28:47 crc 
kubenswrapper[4681]: I0404 02:28:47.251590 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.287986 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.424677 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8r4c\" (UniqueName: \"kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.424821 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.424995 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.528399 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 
02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.528538 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8r4c\" (UniqueName: \"kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.528612 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.529068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.529284 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.562721 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8r4c\" (UniqueName: \"kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c\") pod \"certified-operators-vfkgh\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: 
I0404 02:28:47.590838 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.595878 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.596094 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-log" containerID="cri-o://388d9c8a0ee93836a8f063a1b05a3017f12521424b0669f7ef0d98c941cd82a1" gracePeriod=30 Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.596289 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-api" containerID="cri-o://ca0b277dc7b5ac3f390bf53923fef73cdddc38eb8fee3c48603d42a2007294f4" gracePeriod=30 Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.908110 4681 generic.go:334] "Generic (PLEG): container finished" podID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerID="388d9c8a0ee93836a8f063a1b05a3017f12521424b0669f7ef0d98c941cd82a1" exitCode=143 Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.908446 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerDied","Data":"388d9c8a0ee93836a8f063a1b05a3017f12521424b0669f7ef0d98c941cd82a1"} Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.912749 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerStarted","Data":"e85f588eb7f21160c968acab5633042980c7ee41dce081fb2ffdb0ab49407e7a"} Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.912808 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.945790 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" podStartSLOduration=4.945771492 podStartE2EDuration="4.945771492s" podCreationTimestamp="2026-04-04 02:28:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:47.939352075 +0000 UTC m=+2007.605127195" watchObservedRunningTime="2026-04-04 02:28:47.945771492 +0000 UTC m=+2007.611546612" Apr 04 02:28:47 crc kubenswrapper[4681]: I0404 02:28:47.979786 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jtq9h" podStartSLOduration=3.627591615 podStartE2EDuration="7.979764558s" podCreationTimestamp="2026-04-04 02:28:40 +0000 UTC" firstStartedPulling="2026-04-04 02:28:42.791649805 +0000 UTC m=+2002.457424925" lastFinishedPulling="2026-04-04 02:28:47.143822748 +0000 UTC m=+2006.809597868" observedRunningTime="2026-04-04 02:28:47.960173239 +0000 UTC m=+2007.625948359" watchObservedRunningTime="2026-04-04 02:28:47.979764558 +0000 UTC m=+2007.645539668" Apr 04 02:28:48 crc kubenswrapper[4681]: I0404 02:28:48.227812 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:28:48 crc kubenswrapper[4681]: I0404 02:28:48.588448 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:48 crc kubenswrapper[4681]: I0404 02:28:48.923871 4681 generic.go:334] "Generic (PLEG): container finished" podID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerID="579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c" exitCode=0 Apr 04 02:28:48 crc kubenswrapper[4681]: I0404 02:28:48.924077 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-vfkgh" event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerDied","Data":"579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c"} Apr 04 02:28:48 crc kubenswrapper[4681]: I0404 02:28:48.924390 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerStarted","Data":"2bcc1049443486a8e301a90706907ceeccbb3b93f7a6db5eb78a01341c423a1c"} Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.147938 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.148248 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-central-agent" containerID="cri-o://0b754dfab6e10f5bff69352930844341e9ef06b8c84633cb14947a2c7404145c" gracePeriod=30 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.148313 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="sg-core" containerID="cri-o://8e24430e03521f4331387e89f4d6208727be6c4c35c8eed9b08d9aacf0fae40f" gracePeriod=30 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.148329 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-notification-agent" containerID="cri-o://ed8c84566f7b23c510564455fcf27b42c2bc928a722478f2535fa7c3a678f070" gracePeriod=30 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.148384 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="proxy-httpd" 
containerID="cri-o://92478cdaf637f692b722213ee9f123d26d0eb703bd75deee9e652843eaeba8f3" gracePeriod=30 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963431 4681 generic.go:334] "Generic (PLEG): container finished" podID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerID="92478cdaf637f692b722213ee9f123d26d0eb703bd75deee9e652843eaeba8f3" exitCode=0 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963754 4681 generic.go:334] "Generic (PLEG): container finished" podID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerID="8e24430e03521f4331387e89f4d6208727be6c4c35c8eed9b08d9aacf0fae40f" exitCode=2 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963770 4681 generic.go:334] "Generic (PLEG): container finished" podID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerID="0b754dfab6e10f5bff69352930844341e9ef06b8c84633cb14947a2c7404145c" exitCode=0 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963488 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerDied","Data":"92478cdaf637f692b722213ee9f123d26d0eb703bd75deee9e652843eaeba8f3"} Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerDied","Data":"8e24430e03521f4331387e89f4d6208727be6c4c35c8eed9b08d9aacf0fae40f"} Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.963900 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerDied","Data":"0b754dfab6e10f5bff69352930844341e9ef06b8c84633cb14947a2c7404145c"} Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.967449 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" 
event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerStarted","Data":"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b"} Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.974523 4681 generic.go:334] "Generic (PLEG): container finished" podID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerID="ca0b277dc7b5ac3f390bf53923fef73cdddc38eb8fee3c48603d42a2007294f4" exitCode=0 Apr 04 02:28:49 crc kubenswrapper[4681]: I0404 02:28:49.974568 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerDied","Data":"ca0b277dc7b5ac3f390bf53923fef73cdddc38eb8fee3c48603d42a2007294f4"} Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.317227 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.497506 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data\") pod \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.497584 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n65vs\" (UniqueName: \"kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs\") pod \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.497638 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle\") pod \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 
02:28:50.497769 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs\") pod \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\" (UID: \"d8f784c8-6944-46b5-b4c2-e81f403dfa44\") " Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.498230 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs" (OuterVolumeSpecName: "logs") pod "d8f784c8-6944-46b5-b4c2-e81f403dfa44" (UID: "d8f784c8-6944-46b5-b4c2-e81f403dfa44"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.505413 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs" (OuterVolumeSpecName: "kube-api-access-n65vs") pod "d8f784c8-6944-46b5-b4c2-e81f403dfa44" (UID: "d8f784c8-6944-46b5-b4c2-e81f403dfa44"). InnerVolumeSpecName "kube-api-access-n65vs". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.531091 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data" (OuterVolumeSpecName: "config-data") pod "d8f784c8-6944-46b5-b4c2-e81f403dfa44" (UID: "d8f784c8-6944-46b5-b4c2-e81f403dfa44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.545030 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8f784c8-6944-46b5-b4c2-e81f403dfa44" (UID: "d8f784c8-6944-46b5-b4c2-e81f403dfa44"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.599744 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.599784 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n65vs\" (UniqueName: \"kubernetes.io/projected/d8f784c8-6944-46b5-b4c2-e81f403dfa44-kube-api-access-n65vs\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.599798 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8f784c8-6944-46b5-b4c2-e81f403dfa44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.599811 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8f784c8-6944-46b5-b4c2-e81f403dfa44-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.990942 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.990947 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d8f784c8-6944-46b5-b4c2-e81f403dfa44","Type":"ContainerDied","Data":"c81020721508cd4cee5827cd3862e5f5e2054b0d8337f9f9616b2ce44cf0d5c4"} Apr 04 02:28:50 crc kubenswrapper[4681]: I0404 02:28:50.991026 4681 scope.go:117] "RemoveContainer" containerID="ca0b277dc7b5ac3f390bf53923fef73cdddc38eb8fee3c48603d42a2007294f4" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.002812 4681 generic.go:334] "Generic (PLEG): container finished" podID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerID="ed8c84566f7b23c510564455fcf27b42c2bc928a722478f2535fa7c3a678f070" exitCode=0 Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.002997 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerDied","Data":"ed8c84566f7b23c510564455fcf27b42c2bc928a722478f2535fa7c3a678f070"} Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.009086 4681 generic.go:334] "Generic (PLEG): container finished" podID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerID="b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b" exitCode=0 Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.009143 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerDied","Data":"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b"} Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.056853 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.084664 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 
02:28:51.105312 4681 scope.go:117] "RemoveContainer" containerID="388d9c8a0ee93836a8f063a1b05a3017f12521424b0669f7ef0d98c941cd82a1" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.105472 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:51 crc kubenswrapper[4681]: E0404 02:28:51.105963 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-log" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.105985 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-log" Apr 04 02:28:51 crc kubenswrapper[4681]: E0404 02:28:51.106004 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-api" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.106013 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-api" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.106286 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-api" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.106314 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" containerName="nova-api-log" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.107840 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.110741 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.110906 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.111092 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.118002 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.190778 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.191050 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.196339 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210044 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210352 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210440 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210545 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210668 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.210923 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xncjn\" (UniqueName: \"kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.220298 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8f784c8-6944-46b5-b4c2-e81f403dfa44" path="/var/lib/kubelet/pods/d8f784c8-6944-46b5-b4c2-e81f403dfa44/volumes" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.249809 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312047 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312137 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312187 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlpb\" (UniqueName: \"kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312238 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312297 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312416 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312455 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312503 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd\") pod \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\" (UID: \"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2\") " Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312812 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xncjn\" (UniqueName: \"kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312880 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312919 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312939 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312973 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.312997 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.313065 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: 
"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.314433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.315055 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.317673 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.318128 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.322371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb" (OuterVolumeSpecName: "kube-api-access-ddlpb") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "kube-api-access-ddlpb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.323636 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.324856 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts" (OuterVolumeSpecName: "scripts") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.326415 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.332596 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xncjn\" (UniqueName: \"kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn\") pod \"nova-api-0\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.352667 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.385054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422460 4681 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-log-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422532 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422545 4681 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-run-httpd\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422553 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422589 4681 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.422599 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ddlpb\" (UniqueName: 
\"kubernetes.io/projected/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-kube-api-access-ddlpb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.428003 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.449659 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data" (OuterVolumeSpecName: "config-data") pod "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" (UID: "11eed3c6-7bf0-4496-b9c1-de5ab8f113a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.494322 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.525232 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:51 crc kubenswrapper[4681]: I0404 02:28:51.525279 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.024406 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.025526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"11eed3c6-7bf0-4496-b9c1-de5ab8f113a2","Type":"ContainerDied","Data":"5c219d17d8dc89e9342e88e20b6b7c6bcf5fc913abaf820b52687daee0188da9"} Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.025564 4681 scope.go:117] "RemoveContainer" containerID="92478cdaf637f692b722213ee9f123d26d0eb703bd75deee9e652843eaeba8f3" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.046341 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.056838 4681 scope.go:117] "RemoveContainer" containerID="8e24430e03521f4331387e89f4d6208727be6c4c35c8eed9b08d9aacf0fae40f" Apr 04 02:28:52 crc kubenswrapper[4681]: W0404 02:28:52.067046 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20b30c42_c576_4f80_ab8c_23e4f5a73519.slice/crio-ca522e03434d495c5b68be9107b049cab5f52637594f5eb932bf52e0eaca3d27 WatchSource:0}: Error finding container ca522e03434d495c5b68be9107b049cab5f52637594f5eb932bf52e0eaca3d27: Status 404 returned error can't find the container with id ca522e03434d495c5b68be9107b049cab5f52637594f5eb932bf52e0eaca3d27 Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.086664 4681 scope.go:117] "RemoveContainer" containerID="ed8c84566f7b23c510564455fcf27b42c2bc928a722478f2535fa7c3a678f070" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.104211 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.106318 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.117499 4681 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.134549 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:52 crc kubenswrapper[4681]: E0404 02:28:52.135052 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="proxy-httpd" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135072 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="proxy-httpd" Apr 04 02:28:52 crc kubenswrapper[4681]: E0404 02:28:52.135091 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-notification-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135097 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-notification-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: E0404 02:28:52.135112 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-central-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135119 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-central-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: E0404 02:28:52.135139 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="sg-core" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135144 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="sg-core" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135375 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" 
containerName="ceilometer-notification-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135405 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="sg-core" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135413 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="ceilometer-central-agent" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.135424 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" containerName="proxy-httpd" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.138520 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.140803 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.140986 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.141160 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.142581 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.144759 4681 scope.go:117] "RemoveContainer" containerID="0b754dfab6e10f5bff69352930844341e9ef06b8c84633cb14947a2c7404145c" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241235 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-log-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " 
pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241438 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-config-data\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241511 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241635 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-run-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241659 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241681 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkn9\" (UniqueName: \"kubernetes.io/projected/130000be-4800-4c22-9a54-08918788abad-kube-api-access-jjkn9\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241711 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-scripts\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.241727 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.343510 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-run-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.343805 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.343843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkn9\" (UniqueName: \"kubernetes.io/projected/130000be-4800-4c22-9a54-08918788abad-kube-api-access-jjkn9\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.343882 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-scripts\") pod 
\"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.343910 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.344057 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-run-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.344095 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-log-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.344193 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-config-data\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.346843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.344392 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/130000be-4800-4c22-9a54-08918788abad-log-httpd\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.350896 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.351063 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-scripts\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.351129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.361226 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-config-data\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.361829 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/130000be-4800-4c22-9a54-08918788abad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 
02:28:52.363580 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkn9\" (UniqueName: \"kubernetes.io/projected/130000be-4800-4c22-9a54-08918788abad-kube-api-access-jjkn9\") pod \"ceilometer-0\" (UID: \"130000be-4800-4c22-9a54-08918788abad\") " pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.461453 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Apr 04 02:28:52 crc kubenswrapper[4681]: I0404 02:28:52.966817 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.035408 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerStarted","Data":"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0"} Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.035456 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerStarted","Data":"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262"} Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.035471 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerStarted","Data":"ca522e03434d495c5b68be9107b049cab5f52637594f5eb932bf52e0eaca3d27"} Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.041344 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"130000be-4800-4c22-9a54-08918788abad","Type":"ContainerStarted","Data":"5ebcb142419a5c75cff90aca1cd06ccf2c9ba9a63a8259aca664d8accaef0838"} Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.044210 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" 
event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerStarted","Data":"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced"} Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.064068 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.064051084 podStartE2EDuration="2.064051084s" podCreationTimestamp="2026-04-04 02:28:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:53.058622855 +0000 UTC m=+2012.724397975" watchObservedRunningTime="2026-04-04 02:28:53.064051084 +0000 UTC m=+2012.729826204" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.079870 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vfkgh" podStartSLOduration=3.054095899 podStartE2EDuration="6.079855838s" podCreationTimestamp="2026-04-04 02:28:47 +0000 UTC" firstStartedPulling="2026-04-04 02:28:48.926432764 +0000 UTC m=+2008.592207884" lastFinishedPulling="2026-04-04 02:28:51.952192693 +0000 UTC m=+2011.617967823" observedRunningTime="2026-04-04 02:28:53.075786427 +0000 UTC m=+2012.741561567" watchObservedRunningTime="2026-04-04 02:28:53.079855838 +0000 UTC m=+2012.745630958" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.214434 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11eed3c6-7bf0-4496-b9c1-de5ab8f113a2" path="/var/lib/kubelet/pods/11eed3c6-7bf0-4496-b9c1-de5ab8f113a2/volumes" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.589185 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.607278 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 
02:28:53.607315 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.616966 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:53 crc kubenswrapper[4681]: I0404 02:28:53.640430 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.058479 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"130000be-4800-4c22-9a54-08918788abad","Type":"ContainerStarted","Data":"b67af32197fd3fa4fe7bf19140bbeeb3d9889be74f4a4e20a7b9d40807d53795"} Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.058725 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"130000be-4800-4c22-9a54-08918788abad","Type":"ContainerStarted","Data":"ab8246237bcd47f1162a47f8505160839a9ffef6ff0fc2cc700908304a233aa6"} Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.086177 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.202121 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:28:54 crc kubenswrapper[4681]: E0404 02:28:54.202427 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.276453 4681 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-n7nn9"] Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.278226 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.280172 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.280283 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.304178 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n7nn9"] Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.330568 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.393344 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.393412 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6crr\" (UniqueName: \"kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.393517 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.393614 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.404876 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.405181 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="dnsmasq-dns" containerID="cri-o://f99760b19a10d6991768c5b00c03b065466eb4182071e7c363261547edb8ae3b" gracePeriod=10 Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.500547 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.500876 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6crr\" (UniqueName: \"kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.500932 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.501053 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.506858 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.507138 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.509117 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.520600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6crr\" 
(UniqueName: \"kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr\") pod \"nova-cell1-cell-mapping-n7nn9\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.609108 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.619465 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:54 crc kubenswrapper[4681]: I0404 02:28:54.619765 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.087779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"130000be-4800-4c22-9a54-08918788abad","Type":"ContainerStarted","Data":"94de919ac3fb67c50b538d72d197613ea23273b58039be525170ef2bbb50c970"} Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.099414 4681 generic.go:334] "Generic (PLEG): container finished" podID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerID="f99760b19a10d6991768c5b00c03b065466eb4182071e7c363261547edb8ae3b" exitCode=0 Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.099663 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jtq9h" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="registry-server" 
containerID="cri-o://e85f588eb7f21160c968acab5633042980c7ee41dce081fb2ffdb0ab49407e7a" gracePeriod=2 Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.100642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" event={"ID":"ae22ed73-322c-41fc-821e-b0f2e7217ab6","Type":"ContainerDied","Data":"f99760b19a10d6991768c5b00c03b065466eb4182071e7c363261547edb8ae3b"} Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.100711 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" event={"ID":"ae22ed73-322c-41fc-821e-b0f2e7217ab6","Type":"ContainerDied","Data":"145b92508a5c1190614848f83e3add85e47d8247d3264eb0baaab3ee03c6bb8c"} Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.100726 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="145b92508a5c1190614848f83e3add85e47d8247d3264eb0baaab3ee03c6bb8c" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.188506 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324475 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324572 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c97lg\" (UniqueName: \"kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324632 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324835 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.324908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb\") pod \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\" (UID: \"ae22ed73-322c-41fc-821e-b0f2e7217ab6\") " Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.326358 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-n7nn9"] Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.331565 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg" (OuterVolumeSpecName: "kube-api-access-c97lg") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "kube-api-access-c97lg". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.391660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.394659 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.395545 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.410352 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config" (OuterVolumeSpecName: "config") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.424545 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae22ed73-322c-41fc-821e-b0f2e7217ab6" (UID: "ae22ed73-322c-41fc-821e-b0f2e7217ab6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427879 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c97lg\" (UniqueName: \"kubernetes.io/projected/ae22ed73-322c-41fc-821e-b0f2e7217ab6-kube-api-access-c97lg\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427915 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427928 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427939 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427950 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:55 crc kubenswrapper[4681]: I0404 02:28:55.427963 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae22ed73-322c-41fc-821e-b0f2e7217ab6-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.118391 4681 generic.go:334] "Generic (PLEG): container finished" podID="080f79db-04a5-4733-8669-83b168e7e448" containerID="e85f588eb7f21160c968acab5633042980c7ee41dce081fb2ffdb0ab49407e7a" exitCode=0 Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.118545 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerDied","Data":"e85f588eb7f21160c968acab5633042980c7ee41dce081fb2ffdb0ab49407e7a"} Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.124642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n7nn9" event={"ID":"f2487907-8ce5-4e62-a772-66ac674c64e0","Type":"ContainerStarted","Data":"effb887815aa24bd5625259c978ee550f13d88ff5100d9350b737ed32a1eae8d"} Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.124690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n7nn9" event={"ID":"f2487907-8ce5-4e62-a772-66ac674c64e0","Type":"ContainerStarted","Data":"9da64220913d8421df2a3558a2a73d40960ad9f5562fa33b0f0a2e337c12d9f8"} Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.124975 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d899d57cc-vrn4r" Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.157904 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-n7nn9" podStartSLOduration=2.157884986 podStartE2EDuration="2.157884986s" podCreationTimestamp="2026-04-04 02:28:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:28:56.147416357 +0000 UTC m=+2015.813191477" watchObservedRunningTime="2026-04-04 02:28:56.157884986 +0000 UTC m=+2015.823660096" Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.181341 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:56 crc kubenswrapper[4681]: I0404 02:28:56.194956 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d899d57cc-vrn4r"] Apr 04 02:28:57 crc kubenswrapper[4681]: 
I0404 02:28:57.233328 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" path="/var/lib/kubelet/pods/ae22ed73-322c-41fc-821e-b0f2e7217ab6/volumes" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.348831 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.471048 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content\") pod \"080f79db-04a5-4733-8669-83b168e7e448\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.471122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities\") pod \"080f79db-04a5-4733-8669-83b168e7e448\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.471188 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6k6c\" (UniqueName: \"kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c\") pod \"080f79db-04a5-4733-8669-83b168e7e448\" (UID: \"080f79db-04a5-4733-8669-83b168e7e448\") " Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.471969 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities" (OuterVolumeSpecName: "utilities") pod "080f79db-04a5-4733-8669-83b168e7e448" (UID: "080f79db-04a5-4733-8669-83b168e7e448"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.476832 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c" (OuterVolumeSpecName: "kube-api-access-l6k6c") pod "080f79db-04a5-4733-8669-83b168e7e448" (UID: "080f79db-04a5-4733-8669-83b168e7e448"). InnerVolumeSpecName "kube-api-access-l6k6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.534122 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080f79db-04a5-4733-8669-83b168e7e448" (UID: "080f79db-04a5-4733-8669-83b168e7e448"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.573927 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.574210 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080f79db-04a5-4733-8669-83b168e7e448-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.574292 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6k6c\" (UniqueName: \"kubernetes.io/projected/080f79db-04a5-4733-8669-83b168e7e448-kube-api-access-l6k6c\") on node \"crc\" DevicePath \"\"" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.591042 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:57 crc 
kubenswrapper[4681]: I0404 02:28:57.592147 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:57 crc kubenswrapper[4681]: I0404 02:28:57.667881 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.219046 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"130000be-4800-4c22-9a54-08918788abad","Type":"ContainerStarted","Data":"d6ec3aa694faff04723bd90d419d66995751b54137f47f5e453684ab597d35d7"} Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.219427 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.222518 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jtq9h" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.222564 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jtq9h" event={"ID":"080f79db-04a5-4733-8669-83b168e7e448","Type":"ContainerDied","Data":"0e6ffe2b13c39180945c6c590076d3ae00e8159d214b5401165d32f3840b73d1"} Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.222597 4681 scope.go:117] "RemoveContainer" containerID="e85f588eb7f21160c968acab5633042980c7ee41dce081fb2ffdb0ab49407e7a" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.247920 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.21750387 podStartE2EDuration="6.247895799s" podCreationTimestamp="2026-04-04 02:28:52 +0000 UTC" firstStartedPulling="2026-04-04 02:28:52.963848607 +0000 UTC m=+2012.629623727" lastFinishedPulling="2026-04-04 02:28:56.994240536 +0000 UTC m=+2016.660015656" observedRunningTime="2026-04-04 
02:28:58.245508403 +0000 UTC m=+2017.911283543" watchObservedRunningTime="2026-04-04 02:28:58.247895799 +0000 UTC m=+2017.913670929" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.250569 4681 scope.go:117] "RemoveContainer" containerID="19d79900902a8fd0988f5781d9d9cb8727bb6a8740e5502d13e2f0c0c662aefb" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.299163 4681 scope.go:117] "RemoveContainer" containerID="9523bb2de993dfc16de3e5a56ee3568542c792f5bfb677a5d39327773aabf945" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.306545 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.314196 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] Apr 04 02:28:58 crc kubenswrapper[4681]: I0404 02:28:58.336577 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jtq9h"] Apr 04 02:28:59 crc kubenswrapper[4681]: I0404 02:28:59.211965 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080f79db-04a5-4733-8669-83b168e7e448" path="/var/lib/kubelet/pods/080f79db-04a5-4733-8669-83b168e7e448/volumes" Apr 04 02:28:59 crc kubenswrapper[4681]: I0404 02:28:59.638473 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.259364 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vfkgh" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="registry-server" containerID="cri-o://ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced" gracePeriod=2 Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.496300 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 
02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.496378 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.607035 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.607282 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.761451 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.763830 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8r4c\" (UniqueName: \"kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c\") pod \"dd41c1b8-d52c-4176-b423-1f8b61246822\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.763985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities\") pod \"dd41c1b8-d52c-4176-b423-1f8b61246822\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.764083 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content\") pod \"dd41c1b8-d52c-4176-b423-1f8b61246822\" (UID: \"dd41c1b8-d52c-4176-b423-1f8b61246822\") " Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.765320 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities" (OuterVolumeSpecName: 
"utilities") pod "dd41c1b8-d52c-4176-b423-1f8b61246822" (UID: "dd41c1b8-d52c-4176-b423-1f8b61246822"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.773222 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c" (OuterVolumeSpecName: "kube-api-access-d8r4c") pod "dd41c1b8-d52c-4176-b423-1f8b61246822" (UID: "dd41c1b8-d52c-4176-b423-1f8b61246822"). InnerVolumeSpecName "kube-api-access-d8r4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.851202 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd41c1b8-d52c-4176-b423-1f8b61246822" (UID: "dd41c1b8-d52c-4176-b423-1f8b61246822"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.868042 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.868366 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd41c1b8-d52c-4176-b423-1f8b61246822-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:01 crc kubenswrapper[4681]: I0404 02:29:01.868446 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8r4c\" (UniqueName: \"kubernetes.io/projected/dd41c1b8-d52c-4176-b423-1f8b61246822-kube-api-access-d8r4c\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.269826 4681 generic.go:334] "Generic (PLEG): container finished" podID="f2487907-8ce5-4e62-a772-66ac674c64e0" containerID="effb887815aa24bd5625259c978ee550f13d88ff5100d9350b737ed32a1eae8d" exitCode=0 Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.269910 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n7nn9" event={"ID":"f2487907-8ce5-4e62-a772-66ac674c64e0","Type":"ContainerDied","Data":"effb887815aa24bd5625259c978ee550f13d88ff5100d9350b737ed32a1eae8d"} Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.280058 4681 generic.go:334] "Generic (PLEG): container finished" podID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerID="ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced" exitCode=0 Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.280112 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" 
event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerDied","Data":"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced"} Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.280159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vfkgh" event={"ID":"dd41c1b8-d52c-4176-b423-1f8b61246822","Type":"ContainerDied","Data":"2bcc1049443486a8e301a90706907ceeccbb3b93f7a6db5eb78a01341c423a1c"} Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.280186 4681 scope.go:117] "RemoveContainer" containerID="ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.280402 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vfkgh" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.341198 4681 scope.go:117] "RemoveContainer" containerID="b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.365755 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.373402 4681 scope.go:117] "RemoveContainer" containerID="579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.379094 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vfkgh"] Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.438117 4681 scope.go:117] "RemoveContainer" containerID="ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced" Apr 04 02:29:02 crc kubenswrapper[4681]: E0404 02:29:02.440886 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced\": container 
with ID starting with ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced not found: ID does not exist" containerID="ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.441016 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced"} err="failed to get container status \"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced\": rpc error: code = NotFound desc = could not find container \"ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced\": container with ID starting with ad00be358ab9c1aad967fedbfa63494ccd8d6cbb83c63ec38b438a82c4d84ced not found: ID does not exist" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.441062 4681 scope.go:117] "RemoveContainer" containerID="b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b" Apr 04 02:29:02 crc kubenswrapper[4681]: E0404 02:29:02.441639 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b\": container with ID starting with b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b not found: ID does not exist" containerID="b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.441677 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b"} err="failed to get container status \"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b\": rpc error: code = NotFound desc = could not find container \"b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b\": container with ID starting with b3cdd9fe7071f7fe24eb7b79c29e3ab61b1f8948c27f4c9c67983cc36ddc181b not 
found: ID does not exist" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.441699 4681 scope.go:117] "RemoveContainer" containerID="579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c" Apr 04 02:29:02 crc kubenswrapper[4681]: E0404 02:29:02.441984 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c\": container with ID starting with 579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c not found: ID does not exist" containerID="579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.442011 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c"} err="failed to get container status \"579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c\": rpc error: code = NotFound desc = could not find container \"579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c\": container with ID starting with 579ecc255c6ee16f5395053f42d72de8322e5a0e1fc0a078874a67aed525da9c not found: ID does not exist" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.512442 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:29:02 crc kubenswrapper[4681]: I0404 02:29:02.512685 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.231:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 
02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.217095 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" path="/var/lib/kubelet/pods/dd41c1b8-d52c-4176-b423-1f8b61246822/volumes" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.615915 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.616849 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.636223 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.736301 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.906554 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data\") pod \"f2487907-8ce5-4e62-a772-66ac674c64e0\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.906659 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6crr\" (UniqueName: \"kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr\") pod \"f2487907-8ce5-4e62-a772-66ac674c64e0\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.906799 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle\") pod \"f2487907-8ce5-4e62-a772-66ac674c64e0\" (UID: 
\"f2487907-8ce5-4e62-a772-66ac674c64e0\") " Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.906877 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts\") pod \"f2487907-8ce5-4e62-a772-66ac674c64e0\" (UID: \"f2487907-8ce5-4e62-a772-66ac674c64e0\") " Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.912591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts" (OuterVolumeSpecName: "scripts") pod "f2487907-8ce5-4e62-a772-66ac674c64e0" (UID: "f2487907-8ce5-4e62-a772-66ac674c64e0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.929393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr" (OuterVolumeSpecName: "kube-api-access-d6crr") pod "f2487907-8ce5-4e62-a772-66ac674c64e0" (UID: "f2487907-8ce5-4e62-a772-66ac674c64e0"). InnerVolumeSpecName "kube-api-access-d6crr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.942072 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2487907-8ce5-4e62-a772-66ac674c64e0" (UID: "f2487907-8ce5-4e62-a772-66ac674c64e0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:03 crc kubenswrapper[4681]: I0404 02:29:03.959976 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data" (OuterVolumeSpecName: "config-data") pod "f2487907-8ce5-4e62-a772-66ac674c64e0" (UID: "f2487907-8ce5-4e62-a772-66ac674c64e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.009333 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.009378 4681 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-scripts\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.009392 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2487907-8ce5-4e62-a772-66ac674c64e0-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.009404 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6crr\" (UniqueName: \"kubernetes.io/projected/f2487907-8ce5-4e62-a772-66ac674c64e0-kube-api-access-d6crr\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.308511 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-n7nn9" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.310022 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-n7nn9" event={"ID":"f2487907-8ce5-4e62-a772-66ac674c64e0","Type":"ContainerDied","Data":"9da64220913d8421df2a3558a2a73d40960ad9f5562fa33b0f0a2e337c12d9f8"} Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.310075 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da64220913d8421df2a3558a2a73d40960ad9f5562fa33b0f0a2e337c12d9f8" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.333451 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.486779 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.488699 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-log" containerID="cri-o://4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262" gracePeriod=30 Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.488920 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-api" containerID="cri-o://7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0" gracePeriod=30 Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.533944 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.534199 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="9616ac2a-6a60-414d-a440-59105ea678ee" 
containerName="nova-scheduler-scheduler" containerID="cri-o://e57fbec9638241c28657308939ec81020cacd82283192da6742fd4fe223deeaa" gracePeriod=30 Apr 04 02:29:04 crc kubenswrapper[4681]: I0404 02:29:04.557899 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:05 crc kubenswrapper[4681]: I0404 02:29:05.321587 4681 generic.go:334] "Generic (PLEG): container finished" podID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerID="4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262" exitCode=143 Apr 04 02:29:05 crc kubenswrapper[4681]: I0404 02:29:05.321649 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerDied","Data":"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262"} Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.166470 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.335941 4681 generic.go:334] "Generic (PLEG): container finished" podID="9616ac2a-6a60-414d-a440-59105ea678ee" containerID="e57fbec9638241c28657308939ec81020cacd82283192da6742fd4fe223deeaa" exitCode=0 Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.336014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9616ac2a-6a60-414d-a440-59105ea678ee","Type":"ContainerDied","Data":"e57fbec9638241c28657308939ec81020cacd82283192da6742fd4fe223deeaa"} Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.336045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"9616ac2a-6a60-414d-a440-59105ea678ee","Type":"ContainerDied","Data":"fdfac131e8ddae49808f6bd7c1a09e4cdc0fe21c455b6bad8c7e724bce571e9e"} Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.336059 4681 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="fdfac131e8ddae49808f6bd7c1a09e4cdc0fe21c455b6bad8c7e724bce571e9e" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338086 4681 generic.go:334] "Generic (PLEG): container finished" podID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerID="7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0" exitCode=0 Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338139 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338174 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerDied","Data":"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0"} Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"20b30c42-c576-4f80-ab8c-23e4f5a73519","Type":"ContainerDied","Data":"ca522e03434d495c5b68be9107b049cab5f52637594f5eb932bf52e0eaca3d27"} Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338214 4681 scope.go:117] "RemoveContainer" containerID="7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338298 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-log" containerID="cri-o://fbe749cd7dabad970bbb5c99b2a9ab570d07d522fece5a7653f3bbdcad6d2034" gracePeriod=30 Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.338399 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-metadata" containerID="cri-o://189ea82beb9e827b8827da85d019ab12eafdd5e0b9b0b15546a114106b0460f1" 
gracePeriod=30 Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361147 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361430 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361519 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xncjn\" (UniqueName: \"kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361654 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.361723 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs\") pod \"20b30c42-c576-4f80-ab8c-23e4f5a73519\" (UID: \"20b30c42-c576-4f80-ab8c-23e4f5a73519\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.362130 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs" (OuterVolumeSpecName: "logs") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.362778 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20b30c42-c576-4f80-ab8c-23e4f5a73519-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.391594 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.395959 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn" (OuterVolumeSpecName: "kube-api-access-xncjn") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). InnerVolumeSpecName "kube-api-access-xncjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.414959 4681 scope.go:117] "RemoveContainer" containerID="4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.422199 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data" (OuterVolumeSpecName: "config-data") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.429753 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.445534 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.448308 4681 scope.go:117] "RemoveContainer" containerID="7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.448846 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0\": container with ID starting with 7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0 not found: ID does not exist" containerID="7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.448906 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0"} err="failed to get container status \"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0\": rpc error: code = NotFound desc = could 
not find container \"7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0\": container with ID starting with 7439c394f97a30b17ebd7eb5b15bb44281f3674101227232957a4e4515e790c0 not found: ID does not exist" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.448928 4681 scope.go:117] "RemoveContainer" containerID="4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.449229 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262\": container with ID starting with 4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262 not found: ID does not exist" containerID="4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.449252 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262"} err="failed to get container status \"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262\": rpc error: code = NotFound desc = could not find container \"4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262\": container with ID starting with 4643ba563d8753e780daeb23cbe4ac4eeeba61bebee2341f84101732f5c36262 not found: ID does not exist" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.466446 4681 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-public-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.466477 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xncjn\" (UniqueName: \"kubernetes.io/projected/20b30c42-c576-4f80-ab8c-23e4f5a73519-kube-api-access-xncjn\") on node \"crc\" DevicePath 
\"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.466488 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.466498 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.474513 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20b30c42-c576-4f80-ab8c-23e4f5a73519" (UID: "20b30c42-c576-4f80-ab8c-23e4f5a73519"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.568017 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle\") pod \"9616ac2a-6a60-414d-a440-59105ea678ee\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.568374 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4w59\" (UniqueName: \"kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59\") pod \"9616ac2a-6a60-414d-a440-59105ea678ee\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.568450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data\") pod 
\"9616ac2a-6a60-414d-a440-59105ea678ee\" (UID: \"9616ac2a-6a60-414d-a440-59105ea678ee\") " Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.568988 4681 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20b30c42-c576-4f80-ab8c-23e4f5a73519-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.576578 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59" (OuterVolumeSpecName: "kube-api-access-j4w59") pod "9616ac2a-6a60-414d-a440-59105ea678ee" (UID: "9616ac2a-6a60-414d-a440-59105ea678ee"). InnerVolumeSpecName "kube-api-access-j4w59". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.602478 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9616ac2a-6a60-414d-a440-59105ea678ee" (UID: "9616ac2a-6a60-414d-a440-59105ea678ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.605121 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data" (OuterVolumeSpecName: "config-data") pod "9616ac2a-6a60-414d-a440-59105ea678ee" (UID: "9616ac2a-6a60-414d-a440-59105ea678ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.670676 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4w59\" (UniqueName: \"kubernetes.io/projected/9616ac2a-6a60-414d-a440-59105ea678ee-kube-api-access-j4w59\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.670715 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.670727 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9616ac2a-6a60-414d-a440-59105ea678ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.677500 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.690892 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701083 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701520 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="extract-utilities" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701536 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="extract-utilities" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701554 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="extract-utilities" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701561 4681 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="extract-utilities" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701581 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2487907-8ce5-4e62-a772-66ac674c64e0" containerName="nova-manage" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701588 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2487907-8ce5-4e62-a772-66ac674c64e0" containerName="nova-manage" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701601 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="extract-content" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701609 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="extract-content" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701616 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="dnsmasq-dns" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701625 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="dnsmasq-dns" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701640 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="extract-content" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701647 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="extract-content" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701666 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9616ac2a-6a60-414d-a440-59105ea678ee" containerName="nova-scheduler-scheduler" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701673 4681 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9616ac2a-6a60-414d-a440-59105ea678ee" containerName="nova-scheduler-scheduler" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701684 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701691 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701706 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-log" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701713 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-log" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701730 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701738 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701748 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="init" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701755 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="init" Apr 04 02:29:06 crc kubenswrapper[4681]: E0404 02:29:06.701770 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-api" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.701778 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-api" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702011 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-api" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702025 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" containerName="nova-api-log" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702034 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="080f79db-04a5-4733-8669-83b168e7e448" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702054 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd41c1b8-d52c-4176-b423-1f8b61246822" containerName="registry-server" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702069 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae22ed73-322c-41fc-821e-b0f2e7217ab6" containerName="dnsmasq-dns" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702081 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9616ac2a-6a60-414d-a440-59105ea678ee" containerName="nova-scheduler-scheduler" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.702097 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2487907-8ce5-4e62-a772-66ac674c64e0" containerName="nova-manage" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.706394 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.710385 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.710662 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.710840 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.741501 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873626 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873665 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqz6x\" (UniqueName: \"kubernetes.io/projected/858e598e-35ac-4ca2-a5d5-52e31278378f-kube-api-access-wqz6x\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873698 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-config-data\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873775 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/858e598e-35ac-4ca2-a5d5-52e31278378f-logs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873790 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.873844 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-public-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.975335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-config-data\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.975453 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/858e598e-35ac-4ca2-a5d5-52e31278378f-logs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.975476 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc 
kubenswrapper[4681]: I0404 02:29:06.975527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-public-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.975589 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.975611 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqz6x\" (UniqueName: \"kubernetes.io/projected/858e598e-35ac-4ca2-a5d5-52e31278378f-kube-api-access-wqz6x\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.976423 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/858e598e-35ac-4ca2-a5d5-52e31278378f-logs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.980491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-config-data\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.981007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.981659 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-public-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.981809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/858e598e-35ac-4ca2-a5d5-52e31278378f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:06 crc kubenswrapper[4681]: I0404 02:29:06.993044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqz6x\" (UniqueName: \"kubernetes.io/projected/858e598e-35ac-4ca2-a5d5-52e31278378f-kube-api-access-wqz6x\") pod \"nova-api-0\" (UID: \"858e598e-35ac-4ca2-a5d5-52e31278378f\") " pod="openstack/nova-api-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.083243 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.213820 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b30c42-c576-4f80-ab8c-23e4f5a73519" path="/var/lib/kubelet/pods/20b30c42-c576-4f80-ab8c-23e4f5a73519/volumes" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.349136 4681 generic.go:334] "Generic (PLEG): container finished" podID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerID="189ea82beb9e827b8827da85d019ab12eafdd5e0b9b0b15546a114106b0460f1" exitCode=0 Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.349496 4681 generic.go:334] "Generic (PLEG): container finished" podID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerID="fbe749cd7dabad970bbb5c99b2a9ab570d07d522fece5a7653f3bbdcad6d2034" exitCode=143 Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.349588 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.350711 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerDied","Data":"189ea82beb9e827b8827da85d019ab12eafdd5e0b9b0b15546a114106b0460f1"} Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.350763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerDied","Data":"fbe749cd7dabad970bbb5c99b2a9ab570d07d522fece5a7653f3bbdcad6d2034"} Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.395386 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.413374 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.423668 4681 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.425188 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.428606 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.437536 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.543814 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.546065 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Apr 04 02:29:07 crc kubenswrapper[4681]: W0404 02:29:07.554527 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod858e598e_35ac_4ca2_a5d5_52e31278378f.slice/crio-d00f910723f7ee8908d77288875b186e8fbd4ab3de54deb025aea00935efd1f5 WatchSource:0}: Error finding container d00f910723f7ee8908d77288875b186e8fbd4ab3de54deb025aea00935efd1f5: Status 404 returned error can't find the container with id d00f910723f7ee8908d77288875b186e8fbd4ab3de54deb025aea00935efd1f5 Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.610454 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data\") pod \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.610561 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fnff\" (UniqueName: 
\"kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff\") pod \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.610616 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs\") pod \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.610782 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs\") pod \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.610893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle\") pod \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\" (UID: \"d83af60c-798f-4343-8a9f-a6a39b6ccb3f\") " Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.611244 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5l5\" (UniqueName: \"kubernetes.io/projected/16143562-a1da-4713-a062-e3b850e170f0-kube-api-access-nm5l5\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.611450 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " 
pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.611530 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-config-data\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.621312 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs" (OuterVolumeSpecName: "logs") pod "d83af60c-798f-4343-8a9f-a6a39b6ccb3f" (UID: "d83af60c-798f-4343-8a9f-a6a39b6ccb3f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.626279 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff" (OuterVolumeSpecName: "kube-api-access-7fnff") pod "d83af60c-798f-4343-8a9f-a6a39b6ccb3f" (UID: "d83af60c-798f-4343-8a9f-a6a39b6ccb3f"). InnerVolumeSpecName "kube-api-access-7fnff". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.650385 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data" (OuterVolumeSpecName: "config-data") pod "d83af60c-798f-4343-8a9f-a6a39b6ccb3f" (UID: "d83af60c-798f-4343-8a9f-a6a39b6ccb3f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.657714 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d83af60c-798f-4343-8a9f-a6a39b6ccb3f" (UID: "d83af60c-798f-4343-8a9f-a6a39b6ccb3f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.709446 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d83af60c-798f-4343-8a9f-a6a39b6ccb3f" (UID: "d83af60c-798f-4343-8a9f-a6a39b6ccb3f"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.712754 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm5l5\" (UniqueName: \"kubernetes.io/projected/16143562-a1da-4713-a062-e3b850e170f0-kube-api-access-nm5l5\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.712858 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.712911 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-config-data\") pod \"nova-scheduler-0\" (UID: 
\"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.712987 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.713001 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fnff\" (UniqueName: \"kubernetes.io/projected/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-kube-api-access-7fnff\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.713013 4681 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-logs\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.713025 4681 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.713037 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83af60c-798f-4343-8a9f-a6a39b6ccb3f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.718031 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.718198 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/16143562-a1da-4713-a062-e3b850e170f0-config-data\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.730641 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm5l5\" (UniqueName: \"kubernetes.io/projected/16143562-a1da-4713-a062-e3b850e170f0-kube-api-access-nm5l5\") pod \"nova-scheduler-0\" (UID: \"16143562-a1da-4713-a062-e3b850e170f0\") " pod="openstack/nova-scheduler-0" Apr 04 02:29:07 crc kubenswrapper[4681]: I0404 02:29:07.767057 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.314031 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Apr 04 02:29:08 crc kubenswrapper[4681]: W0404 02:29:08.318676 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16143562_a1da_4713_a062_e3b850e170f0.slice/crio-79a3eb68df64af42327958230b6ab5dae120e899c07f3ba33934c746d263739a WatchSource:0}: Error finding container 79a3eb68df64af42327958230b6ab5dae120e899c07f3ba33934c746d263739a: Status 404 returned error can't find the container with id 79a3eb68df64af42327958230b6ab5dae120e899c07f3ba33934c746d263739a Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.378575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"858e598e-35ac-4ca2-a5d5-52e31278378f","Type":"ContainerStarted","Data":"ea9ca5e2296ce36c524d3eb1116f16962e833a11f12f25c523b60f3dced8b63e"} Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.378628 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"858e598e-35ac-4ca2-a5d5-52e31278378f","Type":"ContainerStarted","Data":"da4ee7bbe40bd2578bb112d3c21d4df0f6323e8681d7829541a7728a8a24e23c"} Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.378642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"858e598e-35ac-4ca2-a5d5-52e31278378f","Type":"ContainerStarted","Data":"d00f910723f7ee8908d77288875b186e8fbd4ab3de54deb025aea00935efd1f5"} Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.405994 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d83af60c-798f-4343-8a9f-a6a39b6ccb3f","Type":"ContainerDied","Data":"1b1b2fc40b4971b9ddd0fa3edbc86978d0f916463f03b118b65a653f74bb1f21"} Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.406053 4681 scope.go:117] "RemoveContainer" containerID="189ea82beb9e827b8827da85d019ab12eafdd5e0b9b0b15546a114106b0460f1" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.406200 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.424350 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.424331178 podStartE2EDuration="2.424331178s" podCreationTimestamp="2026-04-04 02:29:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:29:08.405750286 +0000 UTC m=+2028.071525406" watchObservedRunningTime="2026-04-04 02:29:08.424331178 +0000 UTC m=+2028.090106298" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.429448 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16143562-a1da-4713-a062-e3b850e170f0","Type":"ContainerStarted","Data":"79a3eb68df64af42327958230b6ab5dae120e899c07f3ba33934c746d263739a"} Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.498000 4681 scope.go:117] "RemoveContainer" containerID="fbe749cd7dabad970bbb5c99b2a9ab570d07d522fece5a7653f3bbdcad6d2034" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.507118 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.516428 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.546960 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:08 crc kubenswrapper[4681]: E0404 02:29:08.547682 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-metadata" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.547702 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-metadata" Apr 04 02:29:08 crc kubenswrapper[4681]: E0404 
02:29:08.547769 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-log" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.547777 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-log" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.548116 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-log" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.548183 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" containerName="nova-metadata-metadata" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.550322 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.553496 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.553731 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.573982 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.732183 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85c4e78-d474-4016-b2b1-e05582da0f60-logs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.732319 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljv24\" 
(UniqueName: \"kubernetes.io/projected/c85c4e78-d474-4016-b2b1-e05582da0f60-kube-api-access-ljv24\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.732516 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-config-data\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.732668 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.732776 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.834310 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljv24\" (UniqueName: \"kubernetes.io/projected/c85c4e78-d474-4016-b2b1-e05582da0f60-kube-api-access-ljv24\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.834472 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-config-data\") pod 
\"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.834527 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.834576 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.834600 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85c4e78-d474-4016-b2b1-e05582da0f60-logs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.835976 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c85c4e78-d474-4016-b2b1-e05582da0f60-logs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.840851 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.840874 4681 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-config-data\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.842901 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c85c4e78-d474-4016-b2b1-e05582da0f60-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.870359 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljv24\" (UniqueName: \"kubernetes.io/projected/c85c4e78-d474-4016-b2b1-e05582da0f60-kube-api-access-ljv24\") pod \"nova-metadata-0\" (UID: \"c85c4e78-d474-4016-b2b1-e05582da0f60\") " pod="openstack/nova-metadata-0" Apr 04 02:29:08 crc kubenswrapper[4681]: I0404 02:29:08.875553 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.201885 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:29:09 crc kubenswrapper[4681]: E0404 02:29:09.202514 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.216596 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9616ac2a-6a60-414d-a440-59105ea678ee" path="/var/lib/kubelet/pods/9616ac2a-6a60-414d-a440-59105ea678ee/volumes" Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.217343 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83af60c-798f-4343-8a9f-a6a39b6ccb3f" path="/var/lib/kubelet/pods/d83af60c-798f-4343-8a9f-a6a39b6ccb3f/volumes" Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.366144 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.446660 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c85c4e78-d474-4016-b2b1-e05582da0f60","Type":"ContainerStarted","Data":"0c4404d67e2e9985a49452ebd2b4e0f1d2d67bb33fc533e550a96b533238c306"} Apr 04 02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.449245 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"16143562-a1da-4713-a062-e3b850e170f0","Type":"ContainerStarted","Data":"a360fee6eba6d92ca9a4180be3410bbf9894ed211ef49bb93104cd9b7f108420"} Apr 04 
02:29:09 crc kubenswrapper[4681]: I0404 02:29:09.469303 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.469284778 podStartE2EDuration="2.469284778s" podCreationTimestamp="2026-04-04 02:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:29:09.465561315 +0000 UTC m=+2029.131336435" watchObservedRunningTime="2026-04-04 02:29:09.469284778 +0000 UTC m=+2029.135059898" Apr 04 02:29:10 crc kubenswrapper[4681]: I0404 02:29:10.461690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c85c4e78-d474-4016-b2b1-e05582da0f60","Type":"ContainerStarted","Data":"026c4311a9d679253857558070e388381aa358cbb23ae81a980f81e90e2305b8"} Apr 04 02:29:10 crc kubenswrapper[4681]: I0404 02:29:10.461989 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c85c4e78-d474-4016-b2b1-e05582da0f60","Type":"ContainerStarted","Data":"b4f7902dd3bbb8be210b2572e60f88dac4609b628a60edc5e0f463144d27caec"} Apr 04 02:29:10 crc kubenswrapper[4681]: I0404 02:29:10.483892 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.483870213 podStartE2EDuration="2.483870213s" podCreationTimestamp="2026-04-04 02:29:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:29:10.47793965 +0000 UTC m=+2030.143714780" watchObservedRunningTime="2026-04-04 02:29:10.483870213 +0000 UTC m=+2030.149645343" Apr 04 02:29:12 crc kubenswrapper[4681]: I0404 02:29:12.767518 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.083725 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.084754 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.692337 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.694693 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.706832 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.712711 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxtb2\" (UniqueName: \"kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.712886 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.712914 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 
02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.767923 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.799379 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.814964 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxtb2\" (UniqueName: \"kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.815142 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.815167 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.815712 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.815796 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:17 crc kubenswrapper[4681]: I0404 02:29:17.851053 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxtb2\" (UniqueName: \"kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2\") pod \"redhat-marketplace-g479w\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.017494 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.102468 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="858e598e-35ac-4ca2-a5d5-52e31278378f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.102481 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="858e598e-35ac-4ca2-a5d5-52e31278378f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.605939 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.646820 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:18 crc 
kubenswrapper[4681]: I0404 02:29:18.877165 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Apr 04 02:29:18 crc kubenswrapper[4681]: I0404 02:29:18.877314 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Apr 04 02:29:19 crc kubenswrapper[4681]: I0404 02:29:19.548053 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ecf2236-21de-4759-8950-1f95cd329e03" containerID="d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340" exitCode=0 Apr 04 02:29:19 crc kubenswrapper[4681]: I0404 02:29:19.549613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerDied","Data":"d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340"} Apr 04 02:29:19 crc kubenswrapper[4681]: I0404 02:29:19.549645 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerStarted","Data":"2a90fc86514cdf91036c0c264d93e467928e0467745cc15c00164ea582beca4d"} Apr 04 02:29:19 crc kubenswrapper[4681]: I0404 02:29:19.894527 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c85c4e78-d474-4016-b2b1-e05582da0f60" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:29:19 crc kubenswrapper[4681]: I0404 02:29:19.894531 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c85c4e78-d474-4016-b2b1-e05582da0f60" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.236:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 04 02:29:20 crc 
kubenswrapper[4681]: I0404 02:29:20.566785 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerStarted","Data":"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46"} Apr 04 02:29:21 crc kubenswrapper[4681]: I0404 02:29:21.200971 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:29:21 crc kubenswrapper[4681]: E0404 02:29:21.201441 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:29:21 crc kubenswrapper[4681]: I0404 02:29:21.579394 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ecf2236-21de-4759-8950-1f95cd329e03" containerID="b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46" exitCode=0 Apr 04 02:29:21 crc kubenswrapper[4681]: I0404 02:29:21.579441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerDied","Data":"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46"} Apr 04 02:29:22 crc kubenswrapper[4681]: I0404 02:29:22.476857 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Apr 04 02:29:23 crc kubenswrapper[4681]: I0404 02:29:23.610989 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" 
event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerStarted","Data":"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946"} Apr 04 02:29:23 crc kubenswrapper[4681]: I0404 02:29:23.634862 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g479w" podStartSLOduration=3.796008065 podStartE2EDuration="6.634840311s" podCreationTimestamp="2026-04-04 02:29:17 +0000 UTC" firstStartedPulling="2026-04-04 02:29:19.551505335 +0000 UTC m=+2039.217280455" lastFinishedPulling="2026-04-04 02:29:22.390337581 +0000 UTC m=+2042.056112701" observedRunningTime="2026-04-04 02:29:23.628163758 +0000 UTC m=+2043.293938888" watchObservedRunningTime="2026-04-04 02:29:23.634840311 +0000 UTC m=+2043.300615431" Apr 04 02:29:25 crc kubenswrapper[4681]: I0404 02:29:25.083810 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Apr 04 02:29:25 crc kubenswrapper[4681]: I0404 02:29:25.083858 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Apr 04 02:29:26 crc kubenswrapper[4681]: I0404 02:29:26.877234 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:29:26 crc kubenswrapper[4681]: I0404 02:29:26.877319 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Apr 04 02:29:27 crc kubenswrapper[4681]: I0404 02:29:27.093508 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Apr 04 02:29:27 crc kubenswrapper[4681]: I0404 02:29:27.094225 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Apr 04 02:29:27 crc kubenswrapper[4681]: I0404 02:29:27.104115 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Apr 04 02:29:27 crc kubenswrapper[4681]: I0404 02:29:27.673937 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.018023 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.018822 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.068806 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.722156 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.768058 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.883338 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.884080 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Apr 04 02:29:28 crc kubenswrapper[4681]: I0404 02:29:28.888806 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Apr 04 02:29:29 crc kubenswrapper[4681]: I0404 02:29:29.688537 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Apr 04 02:29:30 crc kubenswrapper[4681]: I0404 02:29:30.690408 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g479w" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="registry-server" 
containerID="cri-o://c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946" gracePeriod=2 Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.116525 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.208232 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content\") pod \"5ecf2236-21de-4759-8950-1f95cd329e03\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.208314 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxtb2\" (UniqueName: \"kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2\") pod \"5ecf2236-21de-4759-8950-1f95cd329e03\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.208546 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities\") pod \"5ecf2236-21de-4759-8950-1f95cd329e03\" (UID: \"5ecf2236-21de-4759-8950-1f95cd329e03\") " Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.210579 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities" (OuterVolumeSpecName: "utilities") pod "5ecf2236-21de-4759-8950-1f95cd329e03" (UID: "5ecf2236-21de-4759-8950-1f95cd329e03"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.216567 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2" (OuterVolumeSpecName: "kube-api-access-lxtb2") pod "5ecf2236-21de-4759-8950-1f95cd329e03" (UID: "5ecf2236-21de-4759-8950-1f95cd329e03"). InnerVolumeSpecName "kube-api-access-lxtb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.244015 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ecf2236-21de-4759-8950-1f95cd329e03" (UID: "5ecf2236-21de-4759-8950-1f95cd329e03"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.310301 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.310343 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxtb2\" (UniqueName: \"kubernetes.io/projected/5ecf2236-21de-4759-8950-1f95cd329e03-kube-api-access-lxtb2\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.310359 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecf2236-21de-4759-8950-1f95cd329e03-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.707772 4681 generic.go:334] "Generic (PLEG): container finished" podID="5ecf2236-21de-4759-8950-1f95cd329e03" 
containerID="c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946" exitCode=0 Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.707860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerDied","Data":"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946"} Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.708202 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g479w" event={"ID":"5ecf2236-21de-4759-8950-1f95cd329e03","Type":"ContainerDied","Data":"2a90fc86514cdf91036c0c264d93e467928e0467745cc15c00164ea582beca4d"} Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.708239 4681 scope.go:117] "RemoveContainer" containerID="c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.707925 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g479w" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.766553 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.767485 4681 scope.go:117] "RemoveContainer" containerID="b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.776422 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g479w"] Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.794454 4681 scope.go:117] "RemoveContainer" containerID="d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.837859 4681 scope.go:117] "RemoveContainer" containerID="c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946" Apr 04 02:29:31 crc kubenswrapper[4681]: E0404 02:29:31.838302 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946\": container with ID starting with c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946 not found: ID does not exist" containerID="c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.838333 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946"} err="failed to get container status \"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946\": rpc error: code = NotFound desc = could not find container \"c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946\": container with ID starting with c3e3e37d790ca218c23aede1a09d52291ab5ad178e4ffdf5cca232b9f854e946 not found: 
ID does not exist" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.838361 4681 scope.go:117] "RemoveContainer" containerID="b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46" Apr 04 02:29:31 crc kubenswrapper[4681]: E0404 02:29:31.838722 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46\": container with ID starting with b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46 not found: ID does not exist" containerID="b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.838755 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46"} err="failed to get container status \"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46\": rpc error: code = NotFound desc = could not find container \"b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46\": container with ID starting with b275ccb05ae95174233f23971feb1111b37569d8f34d925c02f3e587c9a45b46 not found: ID does not exist" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.838776 4681 scope.go:117] "RemoveContainer" containerID="d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340" Apr 04 02:29:31 crc kubenswrapper[4681]: E0404 02:29:31.839092 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340\": container with ID starting with d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340 not found: ID does not exist" containerID="d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340" Apr 04 02:29:31 crc kubenswrapper[4681]: I0404 02:29:31.839160 4681 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340"} err="failed to get container status \"d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340\": rpc error: code = NotFound desc = could not find container \"d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340\": container with ID starting with d71db8fec48ff66c2aebbdc1a784ccb43e49e2f163a2d457198ed81d28717340 not found: ID does not exist" Apr 04 02:29:33 crc kubenswrapper[4681]: I0404 02:29:33.201763 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:29:33 crc kubenswrapper[4681]: E0404 02:29:33.202304 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:29:33 crc kubenswrapper[4681]: I0404 02:29:33.215924 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" path="/var/lib/kubelet/pods/5ecf2236-21de-4759-8950-1f95cd329e03/volumes" Apr 04 02:29:39 crc kubenswrapper[4681]: I0404 02:29:39.766867 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:40 crc kubenswrapper[4681]: I0404 02:29:40.685748 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:43 crc kubenswrapper[4681]: I0404 02:29:43.269414 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="rabbitmq" 
containerID="cri-o://6b45a0a11701e0e9c68853c0d349941d9a5177198c7082613ce522d101158e75" gracePeriod=604797 Apr 04 02:29:43 crc kubenswrapper[4681]: I0404 02:29:43.865476 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" containerID="cri-o://daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15" gracePeriod=604797 Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.852792 4681 generic.go:334] "Generic (PLEG): container finished" podID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerID="6b45a0a11701e0e9c68853c0d349941d9a5177198c7082613ce522d101158e75" exitCode=0 Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.852881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerDied","Data":"6b45a0a11701e0e9c68853c0d349941d9a5177198c7082613ce522d101158e75"} Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.853236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b40175fa-a3b0-40c3-bc35-7d927897b82b","Type":"ContainerDied","Data":"b90271c55f6dc546352e3b58b33273ce09e93afbbc49ffa55de4309797c707ff"} Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.853254 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90271c55f6dc546352e3b58b33273ce09e93afbbc49ffa55de4309797c707ff" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.857726 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936460 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936779 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936808 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936849 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936873 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936924 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936932 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.936974 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfgqd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.937006 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.937729 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:44 crc 
kubenswrapper[4681]: I0404 02:29:44.942736 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.943032 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.946361 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.963470 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:29:44 crc kubenswrapper[4681]: I0404 02:29:44.963643 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd" (OuterVolumeSpecName: "kube-api-access-hfgqd") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "kube-api-access-hfgqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.011928 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data" (OuterVolumeSpecName: "config-data") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.025898 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf" (OuterVolumeSpecName: "server-conf") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.040566 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.040678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie\") pod \"b40175fa-a3b0-40c3-bc35-7d927897b82b\" (UID: \"b40175fa-a3b0-40c3-bc35-7d927897b82b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041401 4681 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-server-conf\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041423 4681 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-plugins-conf\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041443 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041452 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b40175fa-a3b0-40c3-bc35-7d927897b82b-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041460 4681 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/b40175fa-a3b0-40c3-bc35-7d927897b82b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041469 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfgqd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-kube-api-access-hfgqd\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.041476 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.042615 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.045035 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info" (OuterVolumeSpecName: "pod-info") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.070878 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.145769 4681 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b40175fa-a3b0-40c3-bc35-7d927897b82b-pod-info\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.145935 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.146012 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.145785 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b40175fa-a3b0-40c3-bc35-7d927897b82b" (UID: "b40175fa-a3b0-40c3-bc35-7d927897b82b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.248813 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b40175fa-a3b0-40c3-bc35-7d927897b82b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.421497 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.451934 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.451991 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcs6\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.452053 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.452095 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.452126 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.452557 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.452782 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.458617 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6" (OuterVolumeSpecName: "kube-api-access-srcs6") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "kube-api-access-srcs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.459938 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.460009 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.463455 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.463590 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.463623 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.463701 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.463740 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret\") pod \"274d9ff3-9300-48ad-8172-5be9539f6e7b\" (UID: \"274d9ff3-9300-48ad-8172-5be9539f6e7b\") " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.464462 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcs6\" (UniqueName: 
\"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-kube-api-access-srcs6\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.464480 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.464490 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.464509 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.465208 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.470913 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.473930 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.495452 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.507831 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data" (OuterVolumeSpecName: "config-data") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.519404 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info" (OuterVolumeSpecName: "pod-info") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575642 4681 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-plugins-conf\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575689 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575700 4681 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/274d9ff3-9300-48ad-8172-5be9539f6e7b-pod-info\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575714 4681 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/274d9ff3-9300-48ad-8172-5be9539f6e7b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575726 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.575738 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.600957 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf" (OuterVolumeSpecName: "server-conf") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). 
InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.649541 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "274d9ff3-9300-48ad-8172-5be9539f6e7b" (UID: "274d9ff3-9300-48ad-8172-5be9539f6e7b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.677713 4681 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/274d9ff3-9300-48ad-8172-5be9539f6e7b-server-conf\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.677751 4681 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/274d9ff3-9300-48ad-8172-5be9539f6e7b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.863242 4681 generic.go:334] "Generic (PLEG): container finished" podID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerID="daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15" exitCode=0 Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.863325 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.863370 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerDied","Data":"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15"} Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.864561 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"274d9ff3-9300-48ad-8172-5be9539f6e7b","Type":"ContainerDied","Data":"1b1f20ffd5f92abdea6191780f349a233c0076c681e57dcd798f8b28762bd9b5"} Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.864588 4681 scope.go:117] "RemoveContainer" containerID="daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.864680 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.905224 4681 scope.go:117] "RemoveContainer" containerID="6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.938163 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.948412 4681 scope.go:117] "RemoveContainer" containerID="daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15" Apr 04 02:29:45 crc kubenswrapper[4681]: E0404 02:29:45.952837 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15\": container with ID starting with daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15 not found: ID does not exist" 
containerID="daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.952878 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15"} err="failed to get container status \"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15\": rpc error: code = NotFound desc = could not find container \"daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15\": container with ID starting with daeb2cafed59da4f2eff2f0e710e84123865650508f479b8fb19626d7f281a15 not found: ID does not exist" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.952902 4681 scope.go:117] "RemoveContainer" containerID="6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5" Apr 04 02:29:45 crc kubenswrapper[4681]: E0404 02:29:45.957629 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5\": container with ID starting with 6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5 not found: ID does not exist" containerID="6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.957685 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5"} err="failed to get container status \"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5\": rpc error: code = NotFound desc = could not find container \"6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5\": container with ID starting with 6bd0a70fd89cdba92090f31bfc1808554d991ffdc4bb98db56421cd6149eb1f5 not found: ID does not exist" Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.965035 4681 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.975402 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:45 crc kubenswrapper[4681]: I0404 02:29:45.989999 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.005997 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006683 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006710 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006730 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="extract-utilities" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006740 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="extract-utilities" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006759 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="extract-content" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006769 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="extract-content" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006787 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="setup-container" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006814 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="setup-container" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006834 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="setup-container" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006841 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="setup-container" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006866 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006874 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: E0404 02:29:46.006891 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="registry-server" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.006899 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="registry-server" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.007134 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.007151 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" containerName="rabbitmq" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.007169 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecf2236-21de-4759-8950-1f95cd329e03" containerName="registry-server" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.008508 4681 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.013893 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-d5flm" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.013954 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.014802 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.014960 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.017605 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.017849 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.018003 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.032130 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.034703 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.041936 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.042238 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.042456 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.042655 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-qc276" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.041980 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.042125 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.043161 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.060213 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.070113 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089004 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd8bf26-d103-4fa4-92d1-b463c9012169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 
crc kubenswrapper[4681]: I0404 02:29:46.089071 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089136 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089165 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-server-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089440 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caa29c68-1123-4e1c-ba0a-8a34a9be0135-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089481 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089507 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhmsk\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-kube-api-access-fhmsk\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089539 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089567 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caa29c68-1123-4e1c-ba0a-8a34a9be0135-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089593 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089664 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089697 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089720 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089748 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzfh8\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-kube-api-access-vzfh8\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089780 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089844 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089889 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd8bf26-d103-4fa4-92d1-b463c9012169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089967 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-config-data\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.089994 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.090047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.090079 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.191884 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-config-data\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192126 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192219 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192338 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192425 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192557 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd8bf26-d103-4fa4-92d1-b463c9012169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192766 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192803 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192815 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193241 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-config-data\") pod \"rabbitmq-server-0\" (UID: 
\"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.192782 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193468 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193546 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193510 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193749 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-server-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193854 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/caa29c68-1123-4e1c-ba0a-8a34a9be0135-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.193989 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194076 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhmsk\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-kube-api-access-fhmsk\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194150 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194228 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/caa29c68-1123-4e1c-ba0a-8a34a9be0135-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194460 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194543 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194624 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194705 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzfh8\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-kube-api-access-vzfh8\") pod \"rabbitmq-server-0\" (UID: 
\"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194793 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194910 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.195003 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd8bf26-d103-4fa4-92d1-b463c9012169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.195253 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.196177 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.194644 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/caa29c68-1123-4e1c-ba0a-8a34a9be0135-server-conf\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.197462 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dfd8bf26-d103-4fa4-92d1-b463c9012169-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.197646 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dfd8bf26-d103-4fa4-92d1-b463c9012169-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.197746 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.200576 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/caa29c68-1123-4e1c-ba0a-8a34a9be0135-pod-info\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.200666 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/caa29c68-1123-4e1c-ba0a-8a34a9be0135-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.201206 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.201323 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.201878 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.223011 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dfd8bf26-d103-4fa4-92d1-b463c9012169-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.224979 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzfh8\" (UniqueName: \"kubernetes.io/projected/caa29c68-1123-4e1c-ba0a-8a34a9be0135-kube-api-access-vzfh8\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " 
pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.225638 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.226602 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhmsk\" (UniqueName: \"kubernetes.io/projected/dfd8bf26-d103-4fa4-92d1-b463c9012169-kube-api-access-fhmsk\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.255543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"dfd8bf26-d103-4fa4-92d1-b463c9012169\") " pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.262351 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"caa29c68-1123-4e1c-ba0a-8a34a9be0135\") " pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.327179 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.358093 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.859311 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Apr 04 02:29:46 crc kubenswrapper[4681]: I0404 02:29:46.992931 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Apr 04 02:29:47 crc kubenswrapper[4681]: I0404 02:29:47.213565 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="274d9ff3-9300-48ad-8172-5be9539f6e7b" path="/var/lib/kubelet/pods/274d9ff3-9300-48ad-8172-5be9539f6e7b/volumes" Apr 04 02:29:47 crc kubenswrapper[4681]: I0404 02:29:47.215081 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b40175fa-a3b0-40c3-bc35-7d927897b82b" path="/var/lib/kubelet/pods/b40175fa-a3b0-40c3-bc35-7d927897b82b/volumes" Apr 04 02:29:47 crc kubenswrapper[4681]: I0404 02:29:47.887311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caa29c68-1123-4e1c-ba0a-8a34a9be0135","Type":"ContainerStarted","Data":"4614f29597e04803aa4500d8fb5100dd62aef2061e7eaa099875079bd94c1c26"} Apr 04 02:29:47 crc kubenswrapper[4681]: I0404 02:29:47.888823 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfd8bf26-d103-4fa4-92d1-b463c9012169","Type":"ContainerStarted","Data":"58f2752b572f28d92e674dfa9186bc1a2043aad90edbdc061dd1ca45865991fc"} Apr 04 02:29:48 crc kubenswrapper[4681]: I0404 02:29:48.202016 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:29:48 crc kubenswrapper[4681]: E0404 02:29:48.202385 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:29:48 crc kubenswrapper[4681]: I0404 02:29:48.899946 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caa29c68-1123-4e1c-ba0a-8a34a9be0135","Type":"ContainerStarted","Data":"ff87c6c1338fe75e58f46fd4c3bb96e6b9eb007f84cda32e4a0b006b89bf9b22"} Apr 04 02:29:49 crc kubenswrapper[4681]: I0404 02:29:49.911186 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfd8bf26-d103-4fa4-92d1-b463c9012169","Type":"ContainerStarted","Data":"c42f1e78e1aefdcbbb707ac7f69a463196e4f7690e5f3e3b24aa5de53f2378d8"} Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.176320 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.178321 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.190250 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.197857 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285508 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285562 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285607 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdznd\" (UniqueName: \"kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: 
\"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285813 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285925 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.285970 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387217 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387279 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: 
\"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387362 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387390 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387419 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387433 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdznd\" (UniqueName: \"kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.387492 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: 
\"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.388809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.388824 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.389052 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.389048 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.389838 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc 
kubenswrapper[4681]: I0404 02:29:54.391736 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.417549 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdznd\" (UniqueName: \"kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd\") pod \"dnsmasq-dns-5769bb9fb9-5w8jm\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.502834 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:54 crc kubenswrapper[4681]: I0404 02:29:54.971689 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:29:55 crc kubenswrapper[4681]: I0404 02:29:55.971368 4681 generic.go:334] "Generic (PLEG): container finished" podID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerID="cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99" exitCode=0 Apr 04 02:29:55 crc kubenswrapper[4681]: I0404 02:29:55.971442 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" event={"ID":"9edb6c0a-eff0-4497-9ec3-8949f17e734e","Type":"ContainerDied","Data":"cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99"} Apr 04 02:29:55 crc kubenswrapper[4681]: I0404 02:29:55.971680 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" event={"ID":"9edb6c0a-eff0-4497-9ec3-8949f17e734e","Type":"ContainerStarted","Data":"4a13a18003c636e598a21d4f41db9df8f8d3a091f509401f70278d8a0bdd2e84"} 
Apr 04 02:29:56 crc kubenswrapper[4681]: I0404 02:29:56.984030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" event={"ID":"9edb6c0a-eff0-4497-9ec3-8949f17e734e","Type":"ContainerStarted","Data":"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313"} Apr 04 02:29:56 crc kubenswrapper[4681]: I0404 02:29:56.984443 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:29:57 crc kubenswrapper[4681]: I0404 02:29:57.007000 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" podStartSLOduration=3.006984343 podStartE2EDuration="3.006984343s" podCreationTimestamp="2026-04-04 02:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:29:57.005719168 +0000 UTC m=+2076.671494288" watchObservedRunningTime="2026-04-04 02:29:57.006984343 +0000 UTC m=+2076.672759463" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.140621 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587830-zbqnl"] Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.143642 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.145723 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.146677 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.147483 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.159957 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5"] Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.161721 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.164599 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.164835 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.171754 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnplk\" (UniqueName: \"kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk\") pod \"auto-csr-approver-29587830-zbqnl\" (UID: \"c2377ba5-c19e-40bd-918f-e993357fd5e7\") " pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.173497 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5"] Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.190648 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587830-zbqnl"] Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.201678 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.272800 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grctv\" (UniqueName: \"kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.273101 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnplk\" (UniqueName: \"kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk\") pod \"auto-csr-approver-29587830-zbqnl\" (UID: \"c2377ba5-c19e-40bd-918f-e993357fd5e7\") " pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.273239 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.273304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.296905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnplk\" (UniqueName: \"kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk\") pod \"auto-csr-approver-29587830-zbqnl\" (UID: \"c2377ba5-c19e-40bd-918f-e993357fd5e7\") " pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.376195 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.376368 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.376502 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grctv\" (UniqueName: \"kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.377765 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.382404 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.393757 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grctv\" (UniqueName: \"kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv\") pod \"collect-profiles-29587830-cb9k5\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.470701 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.490123 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:00 crc kubenswrapper[4681]: I0404 02:30:00.984318 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587830-zbqnl"] Apr 04 02:30:00 crc kubenswrapper[4681]: W0404 02:30:00.994336 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2377ba5_c19e_40bd_918f_e993357fd5e7.slice/crio-c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0 WatchSource:0}: Error finding container c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0: Status 404 returned error can't find the container with id c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0 Apr 04 02:30:01 crc kubenswrapper[4681]: I0404 02:30:01.029803 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56"} Apr 04 02:30:01 crc kubenswrapper[4681]: I0404 02:30:01.031107 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" event={"ID":"c2377ba5-c19e-40bd-918f-e993357fd5e7","Type":"ContainerStarted","Data":"c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0"} Apr 04 02:30:01 crc kubenswrapper[4681]: I0404 02:30:01.087444 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5"] Apr 04 02:30:02 crc kubenswrapper[4681]: I0404 02:30:02.043638 4681 generic.go:334] "Generic (PLEG): container finished" podID="9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" containerID="17bde040bacf8d803af894efea6d1b5d9a45ddfae0ea83ffc10114c826450491" exitCode=0 Apr 04 02:30:02 crc kubenswrapper[4681]: I0404 02:30:02.044156 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" event={"ID":"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c","Type":"ContainerDied","Data":"17bde040bacf8d803af894efea6d1b5d9a45ddfae0ea83ffc10114c826450491"} Apr 04 02:30:02 crc kubenswrapper[4681]: I0404 02:30:02.044190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" event={"ID":"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c","Type":"ContainerStarted","Data":"3dd8cef47a85cceb9855a7c198280b4ae3447b4c0b9cc3a2350f28970cbe7e6a"} Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.528701 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.549397 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grctv\" (UniqueName: \"kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv\") pod \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.549480 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume\") pod \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.549678 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume\") pod \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\" (UID: \"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c\") " Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.550995 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume" (OuterVolumeSpecName: "config-volume") pod "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" (UID: "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.561102 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" (UID: "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.565027 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv" (OuterVolumeSpecName: "kube-api-access-grctv") pod "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" (UID: "9cba58d4-a8c6-4a88-8c02-d6b12c7b935c"). InnerVolumeSpecName "kube-api-access-grctv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.652124 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.652163 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grctv\" (UniqueName: \"kubernetes.io/projected/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-kube-api-access-grctv\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:03 crc kubenswrapper[4681]: I0404 02:30:03.652173 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.076586 4681 generic.go:334] "Generic (PLEG): container finished" podID="c2377ba5-c19e-40bd-918f-e993357fd5e7" containerID="53a76e0e222e8715df091f7206da8e2c31b906920aef8edfa64147798f0d00a7" exitCode=0 Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.076729 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" event={"ID":"c2377ba5-c19e-40bd-918f-e993357fd5e7","Type":"ContainerDied","Data":"53a76e0e222e8715df091f7206da8e2c31b906920aef8edfa64147798f0d00a7"} Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.080118 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" event={"ID":"9cba58d4-a8c6-4a88-8c02-d6b12c7b935c","Type":"ContainerDied","Data":"3dd8cef47a85cceb9855a7c198280b4ae3447b4c0b9cc3a2350f28970cbe7e6a"} Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.080174 4681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3dd8cef47a85cceb9855a7c198280b4ae3447b4c0b9cc3a2350f28970cbe7e6a" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.080255 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.504181 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.596964 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.597258 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="dnsmasq-dns" containerID="cri-o://cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53" gracePeriod=10 Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.639706 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m"] Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.676328 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587785-ft78m"] Apr 04 02:30:04 crc kubenswrapper[4681]: E0404 02:30:04.748078 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3fb1572_f7e6_4be5_8839_647fa7e78e67.slice/crio-conmon-cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.767302 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7444fdbf45-49mp6"] Apr 04 02:30:04 crc 
kubenswrapper[4681]: E0404 02:30:04.767805 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" containerName="collect-profiles" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.767823 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" containerName="collect-profiles" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.768047 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" containerName="collect-profiles" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.769088 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.788153 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7444fdbf45-49mp6"] Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880106 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-nb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880173 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-sb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880216 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-config\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880429 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmrk7\" (UniqueName: \"kubernetes.io/projected/f63bd22c-53ff-43aa-bc6d-fd388516ef62-kube-api-access-vmrk7\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880519 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-swift-storage-0\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880589 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-openstack-edpm-ipam\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.880618 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-svc\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.982691 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-config\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.982846 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmrk7\" (UniqueName: \"kubernetes.io/projected/f63bd22c-53ff-43aa-bc6d-fd388516ef62-kube-api-access-vmrk7\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.982881 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-swift-storage-0\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.982969 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-openstack-edpm-ipam\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.983009 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-svc\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.983103 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-nb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.983159 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-sb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.983781 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-config\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.984139 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-sb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.984543 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-swift-storage-0\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.984611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-openstack-edpm-ipam\") 
pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.984931 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-dns-svc\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:04 crc kubenswrapper[4681]: I0404 02:30:04.985228 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f63bd22c-53ff-43aa-bc6d-fd388516ef62-ovsdbserver-nb\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.016302 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmrk7\" (UniqueName: \"kubernetes.io/projected/f63bd22c-53ff-43aa-bc6d-fd388516ef62-kube-api-access-vmrk7\") pod \"dnsmasq-dns-7444fdbf45-49mp6\" (UID: \"f63bd22c-53ff-43aa-bc6d-fd388516ef62\") " pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.093353 4681 generic.go:334] "Generic (PLEG): container finished" podID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerID="cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53" exitCode=0 Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.093554 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" event={"ID":"d3fb1572-f7e6-4be5-8839-647fa7e78e67","Type":"ContainerDied","Data":"cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53"} Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.096060 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.242901 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56028b8f-0d6b-4f7f-b4d6-cefc5acec683" path="/var/lib/kubelet/pods/56028b8f-0d6b-4f7f-b4d6-cefc5acec683/volumes" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.363716 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.390557 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.390809 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.390879 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.390918 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.390989 4681 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-7p5gr\" (UniqueName: \"kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.391059 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0\") pod \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\" (UID: \"d3fb1572-f7e6-4be5-8839-647fa7e78e67\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.401664 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr" (OuterVolumeSpecName: "kube-api-access-7p5gr") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "kube-api-access-7p5gr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.452212 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.473803 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.475528 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.487907 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.494923 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.495316 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.495394 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.495461 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p5gr\" (UniqueName: \"kubernetes.io/projected/d3fb1572-f7e6-4be5-8839-647fa7e78e67-kube-api-access-7p5gr\") on node \"crc\" DevicePath 
\"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.495644 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.519568 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config" (OuterVolumeSpecName: "config") pod "d3fb1572-f7e6-4be5-8839-647fa7e78e67" (UID: "d3fb1572-f7e6-4be5-8839-647fa7e78e67"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.555327 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.597735 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnplk\" (UniqueName: \"kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk\") pod \"c2377ba5-c19e-40bd-918f-e993357fd5e7\" (UID: \"c2377ba5-c19e-40bd-918f-e993357fd5e7\") " Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.602660 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk" (OuterVolumeSpecName: "kube-api-access-qnplk") pod "c2377ba5-c19e-40bd-918f-e993357fd5e7" (UID: "c2377ba5-c19e-40bd-918f-e993357fd5e7"). InnerVolumeSpecName "kube-api-access-qnplk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.603481 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnplk\" (UniqueName: \"kubernetes.io/projected/c2377ba5-c19e-40bd-918f-e993357fd5e7-kube-api-access-qnplk\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.603508 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3fb1572-f7e6-4be5-8839-647fa7e78e67-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:05 crc kubenswrapper[4681]: I0404 02:30:05.783495 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7444fdbf45-49mp6"] Apr 04 02:30:05 crc kubenswrapper[4681]: W0404 02:30:05.785378 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf63bd22c_53ff_43aa_bc6d_fd388516ef62.slice/crio-6bd7959965823cf56e8dca845f0398a23dae79face043995801dbea61d766d70 WatchSource:0}: Error finding container 6bd7959965823cf56e8dca845f0398a23dae79face043995801dbea61d766d70: Status 404 returned error can't find the container with id 6bd7959965823cf56e8dca845f0398a23dae79face043995801dbea61d766d70 Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.104841 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" event={"ID":"f63bd22c-53ff-43aa-bc6d-fd388516ef62","Type":"ContainerStarted","Data":"ca788439904f90c26ce8badfd4d8fd38548605fe094cdcbc56229f98f6b2eccc"} Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.104896 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" event={"ID":"f63bd22c-53ff-43aa-bc6d-fd388516ef62","Type":"ContainerStarted","Data":"6bd7959965823cf56e8dca845f0398a23dae79face043995801dbea61d766d70"} Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.107660 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" event={"ID":"d3fb1572-f7e6-4be5-8839-647fa7e78e67","Type":"ContainerDied","Data":"8ddd64e4d2e569901001b9f84b1c861fe929c48b10b230258dd8c474f71e9523"} Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.107700 4681 scope.go:117] "RemoveContainer" containerID="cf8c19ef7dccd1910785ed37646ba2265e713e30ebf0ad090ff8b3581dd0fc53" Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.107804 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54fd76d97c-j5cr2" Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.110970 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" event={"ID":"c2377ba5-c19e-40bd-918f-e993357fd5e7","Type":"ContainerDied","Data":"c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0"} Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.111116 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c636be3590d141b2eee7ca7075b68c841bd08830966a737d6ce3e8ff0b54a7d0" Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.111011 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587830-zbqnl" Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.223669 4681 scope.go:117] "RemoveContainer" containerID="b92741ef78aae6d49d83db8007273ea81eb22edf11dffc051be9d1e489900c0b" Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.284048 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.304033 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54fd76d97c-j5cr2"] Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.640336 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587824-jmsr7"] Apr 04 02:30:06 crc kubenswrapper[4681]: I0404 02:30:06.652147 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587824-jmsr7"] Apr 04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.126192 4681 generic.go:334] "Generic (PLEG): container finished" podID="f63bd22c-53ff-43aa-bc6d-fd388516ef62" containerID="ca788439904f90c26ce8badfd4d8fd38548605fe094cdcbc56229f98f6b2eccc" exitCode=0 Apr 04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.126239 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" event={"ID":"f63bd22c-53ff-43aa-bc6d-fd388516ef62","Type":"ContainerDied","Data":"ca788439904f90c26ce8badfd4d8fd38548605fe094cdcbc56229f98f6b2eccc"} Apr 04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.126306 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" event={"ID":"f63bd22c-53ff-43aa-bc6d-fd388516ef62","Type":"ContainerStarted","Data":"638b750cc69dd00bc0938ae2672be6b64b641516f73c36f0ee2a6ade0b42e3cb"} Apr 04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.126438 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 
04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.212653 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535" path="/var/lib/kubelet/pods/3c86d6aa-ddc5-4b18-8a91-2d4bc6f62535/volumes" Apr 04 02:30:07 crc kubenswrapper[4681]: I0404 02:30:07.213369 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" path="/var/lib/kubelet/pods/d3fb1572-f7e6-4be5-8839-647fa7e78e67/volumes" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.101216 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.128962 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7444fdbf45-49mp6" podStartSLOduration=11.128938979 podStartE2EDuration="11.128938979s" podCreationTimestamp="2026-04-04 02:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:30:07.159708218 +0000 UTC m=+2086.825483338" watchObservedRunningTime="2026-04-04 02:30:15.128938979 +0000 UTC m=+2094.794714099" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.190762 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.191317 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="dnsmasq-dns" containerID="cri-o://65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313" gracePeriod=10 Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.654336 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.714973 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715030 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715056 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715140 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715168 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715221 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdznd\" 
(UniqueName: \"kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.715294 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam\") pod \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\" (UID: \"9edb6c0a-eff0-4497-9ec3-8949f17e734e\") " Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.731515 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd" (OuterVolumeSpecName: "kube-api-access-zdznd") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "kube-api-access-zdznd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.780163 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.784784 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.784999 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.785574 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.789228 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config" (OuterVolumeSpecName: "config") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.794173 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9edb6c0a-eff0-4497-9ec3-8949f17e734e" (UID: "9edb6c0a-eff0-4497-9ec3-8949f17e734e"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817529 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817589 4681 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817605 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdznd\" (UniqueName: \"kubernetes.io/projected/9edb6c0a-eff0-4497-9ec3-8949f17e734e-kube-api-access-zdznd\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817620 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817632 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817644 4681 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:15 crc kubenswrapper[4681]: I0404 02:30:15.817656 4681 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9edb6c0a-eff0-4497-9ec3-8949f17e734e-dns-svc\") on node \"crc\" DevicePath \"\"" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.253831 
4681 generic.go:334] "Generic (PLEG): container finished" podID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerID="65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313" exitCode=0 Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.253891 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" event={"ID":"9edb6c0a-eff0-4497-9ec3-8949f17e734e","Type":"ContainerDied","Data":"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313"} Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.253938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" event={"ID":"9edb6c0a-eff0-4497-9ec3-8949f17e734e","Type":"ContainerDied","Data":"4a13a18003c636e598a21d4f41db9df8f8d3a091f509401f70278d8a0bdd2e84"} Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.253958 4681 scope.go:117] "RemoveContainer" containerID="65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.253958 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5769bb9fb9-5w8jm" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.276467 4681 scope.go:117] "RemoveContainer" containerID="cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.291777 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.302956 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5769bb9fb9-5w8jm"] Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.309320 4681 scope.go:117] "RemoveContainer" containerID="65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313" Apr 04 02:30:16 crc kubenswrapper[4681]: E0404 02:30:16.309775 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313\": container with ID starting with 65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313 not found: ID does not exist" containerID="65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.309819 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313"} err="failed to get container status \"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313\": rpc error: code = NotFound desc = could not find container \"65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313\": container with ID starting with 65a5de4392075c26cc2c890e3953f6a6f24f0e07a437b71cfb71d43b5b7c4313 not found: ID does not exist" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.309849 4681 scope.go:117] "RemoveContainer" containerID="cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99" Apr 04 
02:30:16 crc kubenswrapper[4681]: E0404 02:30:16.310121 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99\": container with ID starting with cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99 not found: ID does not exist" containerID="cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99" Apr 04 02:30:16 crc kubenswrapper[4681]: I0404 02:30:16.310153 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99"} err="failed to get container status \"cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99\": rpc error: code = NotFound desc = could not find container \"cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99\": container with ID starting with cd0b6fa2eeee66ec8773e66a0b4dc98386c58d2cb75231c0a6383ce9cffcdf99 not found: ID does not exist" Apr 04 02:30:17 crc kubenswrapper[4681]: I0404 02:30:17.214956 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" path="/var/lib/kubelet/pods/9edb6c0a-eff0-4497-9ec3-8949f17e734e/volumes" Apr 04 02:30:21 crc kubenswrapper[4681]: I0404 02:30:21.302991 4681 generic.go:334] "Generic (PLEG): container finished" podID="dfd8bf26-d103-4fa4-92d1-b463c9012169" containerID="c42f1e78e1aefdcbbb707ac7f69a463196e4f7690e5f3e3b24aa5de53f2378d8" exitCode=0 Apr 04 02:30:21 crc kubenswrapper[4681]: I0404 02:30:21.303079 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfd8bf26-d103-4fa4-92d1-b463c9012169","Type":"ContainerDied","Data":"c42f1e78e1aefdcbbb707ac7f69a463196e4f7690e5f3e3b24aa5de53f2378d8"} Apr 04 02:30:21 crc kubenswrapper[4681]: I0404 02:30:21.305527 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="caa29c68-1123-4e1c-ba0a-8a34a9be0135" containerID="ff87c6c1338fe75e58f46fd4c3bb96e6b9eb007f84cda32e4a0b006b89bf9b22" exitCode=0
Apr 04 02:30:21 crc kubenswrapper[4681]: I0404 02:30:21.305568 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caa29c68-1123-4e1c-ba0a-8a34a9be0135","Type":"ContainerDied","Data":"ff87c6c1338fe75e58f46fd4c3bb96e6b9eb007f84cda32e4a0b006b89bf9b22"}
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.316705 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"dfd8bf26-d103-4fa4-92d1-b463c9012169","Type":"ContainerStarted","Data":"3142598f2e38653e5c958a81a5f2eee60d4f902ee852baecc07394779105fd16"}
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.317201 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.318905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"caa29c68-1123-4e1c-ba0a-8a34a9be0135","Type":"ContainerStarted","Data":"021ada9de75bbdb1a2b654ad5e47ce31d59033d6bce72c67e995a9866eb9c0e3"}
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.319168 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.342659 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.342645323 podStartE2EDuration="37.342645323s" podCreationTimestamp="2026-04-04 02:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:30:22.341561523 +0000 UTC m=+2102.007336663" watchObservedRunningTime="2026-04-04 02:30:22.342645323 +0000 UTC m=+2102.008420433"
Apr 04 02:30:22 crc kubenswrapper[4681]: I0404 02:30:22.369332 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.369308806 podStartE2EDuration="37.369308806s" podCreationTimestamp="2026-04-04 02:29:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:30:22.365399199 +0000 UTC m=+2102.031174319" watchObservedRunningTime="2026-04-04 02:30:22.369308806 +0000 UTC m=+2102.035083936"
Apr 04 02:30:26 crc kubenswrapper[4681]: I0404 02:30:26.903743 4681 scope.go:117] "RemoveContainer" containerID="a41723557255d81eea98849be1108edf01eebcf5bc883e2de0770a3843dfd1ba"
Apr 04 02:30:26 crc kubenswrapper[4681]: I0404 02:30:26.945633 4681 scope.go:117] "RemoveContainer" containerID="d0e9586c6a17e8d85e77ce9203d97ce45d37dae710b702194232b433a39aad53"
Apr 04 02:30:26 crc kubenswrapper[4681]: I0404 02:30:26.980964 4681 scope.go:117] "RemoveContainer" containerID="40c24a689a1c1c2e579dcd5c0ae257b02ee1eb59fb9e67c39e8b85f9acac6f28"
Apr 04 02:30:27 crc kubenswrapper[4681]: I0404 02:30:27.022682 4681 scope.go:117] "RemoveContainer" containerID="6b45a0a11701e0e9c68853c0d349941d9a5177198c7082613ce522d101158e75"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.472717 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"]
Apr 04 02:30:33 crc kubenswrapper[4681]: E0404 02:30:33.473922 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.473940 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: E0404 02:30:33.473969 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="init"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.473977 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="init"
Apr 04 02:30:33 crc kubenswrapper[4681]: E0404 02:30:33.473995 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474003 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: E0404 02:30:33.474018 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="init"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474025 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="init"
Apr 04 02:30:33 crc kubenswrapper[4681]: E0404 02:30:33.474038 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2377ba5-c19e-40bd-918f-e993357fd5e7" containerName="oc"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474045 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2377ba5-c19e-40bd-918f-e993357fd5e7" containerName="oc"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474345 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9edb6c0a-eff0-4497-9ec3-8949f17e734e" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474368 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2377ba5-c19e-40bd-918f-e993357fd5e7" containerName="oc"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.474406 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3fb1572-f7e6-4be5-8839-647fa7e78e67" containerName="dnsmasq-dns"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.475472 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.482524 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.482679 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.482582 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.482950 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.491942 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"]
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.503465 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.503631 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.503796 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.503954 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg5vg\" (UniqueName: \"kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.606087 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg5vg\" (UniqueName: \"kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.606400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.606461 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.607454 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.613021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.623246 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.626819 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.629469 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg5vg\" (UniqueName: \"kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:33 crc kubenswrapper[4681]: I0404 02:30:33.799124 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:30:34 crc kubenswrapper[4681]: I0404 02:30:34.367690 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"]
Apr 04 02:30:34 crc kubenswrapper[4681]: I0404 02:30:34.452342 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79" event={"ID":"3deb575c-2d6c-41a6-9650-3dddc756bb67","Type":"ContainerStarted","Data":"b095b234e8ef72a21b54fb4d859218ac49eec5df7eff15e676c3f12503a1d297"}
Apr 04 02:30:36 crc kubenswrapper[4681]: I0404 02:30:36.332501 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Apr 04 02:30:36 crc kubenswrapper[4681]: I0404 02:30:36.362653 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Apr 04 02:30:48 crc kubenswrapper[4681]: I0404 02:30:48.623898 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79" event={"ID":"3deb575c-2d6c-41a6-9650-3dddc756bb67","Type":"ContainerStarted","Data":"e1623fda7215fb9aa6c28d81ccbf2aba68f4ee8fde215a580db0f2010d24ea40"}
Apr 04 02:30:48 crc kubenswrapper[4681]: I0404 02:30:48.647284 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79" podStartSLOduration=2.017869681 podStartE2EDuration="15.647250721s" podCreationTimestamp="2026-04-04 02:30:33 +0000 UTC" firstStartedPulling="2026-04-04 02:30:34.371368794 +0000 UTC m=+2114.037143914" lastFinishedPulling="2026-04-04 02:30:48.000749834 +0000 UTC m=+2127.666524954" observedRunningTime="2026-04-04 02:30:48.63847487 +0000 UTC m=+2128.304249990" watchObservedRunningTime="2026-04-04 02:30:48.647250721 +0000 UTC m=+2128.313025841"
Apr 04 02:30:58 crc kubenswrapper[4681]: I0404 02:30:58.738184 4681 generic.go:334] "Generic (PLEG): container finished" podID="3deb575c-2d6c-41a6-9650-3dddc756bb67" containerID="e1623fda7215fb9aa6c28d81ccbf2aba68f4ee8fde215a580db0f2010d24ea40" exitCode=0
Apr 04 02:30:58 crc kubenswrapper[4681]: I0404 02:30:58.738630 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79" event={"ID":"3deb575c-2d6c-41a6-9650-3dddc756bb67","Type":"ContainerDied","Data":"e1623fda7215fb9aa6c28d81ccbf2aba68f4ee8fde215a580db0f2010d24ea40"}
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.156894 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.278389 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory\") pod \"3deb575c-2d6c-41a6-9650-3dddc756bb67\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") "
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.278556 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle\") pod \"3deb575c-2d6c-41a6-9650-3dddc756bb67\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") "
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.278612 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam\") pod \"3deb575c-2d6c-41a6-9650-3dddc756bb67\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") "
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.278708 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gg5vg\" (UniqueName: \"kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg\") pod \"3deb575c-2d6c-41a6-9650-3dddc756bb67\" (UID: \"3deb575c-2d6c-41a6-9650-3dddc756bb67\") "
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.284125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "3deb575c-2d6c-41a6-9650-3dddc756bb67" (UID: "3deb575c-2d6c-41a6-9650-3dddc756bb67"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.284693 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg" (OuterVolumeSpecName: "kube-api-access-gg5vg") pod "3deb575c-2d6c-41a6-9650-3dddc756bb67" (UID: "3deb575c-2d6c-41a6-9650-3dddc756bb67"). InnerVolumeSpecName "kube-api-access-gg5vg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.307661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory" (OuterVolumeSpecName: "inventory") pod "3deb575c-2d6c-41a6-9650-3dddc756bb67" (UID: "3deb575c-2d6c-41a6-9650-3dddc756bb67"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.309635 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3deb575c-2d6c-41a6-9650-3dddc756bb67" (UID: "3deb575c-2d6c-41a6-9650-3dddc756bb67"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.381597 4681 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.381638 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.381652 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gg5vg\" (UniqueName: \"kubernetes.io/projected/3deb575c-2d6c-41a6-9650-3dddc756bb67-kube-api-access-gg5vg\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.381664 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deb575c-2d6c-41a6-9650-3dddc756bb67-inventory\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.759421 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79" event={"ID":"3deb575c-2d6c-41a6-9650-3dddc756bb67","Type":"ContainerDied","Data":"b095b234e8ef72a21b54fb4d859218ac49eec5df7eff15e676c3f12503a1d297"}
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.759695 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b095b234e8ef72a21b54fb4d859218ac49eec5df7eff15e676c3f12503a1d297"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.759451 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.842510 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"]
Apr 04 02:31:00 crc kubenswrapper[4681]: E0404 02:31:00.842897 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deb575c-2d6c-41a6-9650-3dddc756bb67" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.842913 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deb575c-2d6c-41a6-9650-3dddc756bb67" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.843132 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3deb575c-2d6c-41a6-9650-3dddc756bb67" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.843871 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.846457 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.847240 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.847071 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.847426 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.867167 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"]
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.995697 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.995828 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:00 crc kubenswrapper[4681]: I0404 02:31:00.995901 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6scps\" (UniqueName: \"kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.099023 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.099146 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.099257 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6scps\" (UniqueName: \"kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.107113 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.108537 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.133406 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6scps\" (UniqueName: \"kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-4x4ws\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.169006 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.817994 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"]
Apr 04 02:31:01 crc kubenswrapper[4681]: I0404 02:31:01.826829 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 04 02:31:02 crc kubenswrapper[4681]: I0404 02:31:02.779759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws" event={"ID":"3291d540-df5f-43ec-a016-a06df4e58ce6","Type":"ContainerStarted","Data":"3db595742b7feee54aea205a033817d5ea87ccea09326510a166596aea1a2447"}
Apr 04 02:31:02 crc kubenswrapper[4681]: I0404 02:31:02.780156 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws" event={"ID":"3291d540-df5f-43ec-a016-a06df4e58ce6","Type":"ContainerStarted","Data":"dcfd2e622cb3aba086f4773c20e8143f5929c8a06fd61322862c558b9b26697e"}
Apr 04 02:31:02 crc kubenswrapper[4681]: I0404 02:31:02.803245 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws" podStartSLOduration=2.411097423 podStartE2EDuration="2.803215511s" podCreationTimestamp="2026-04-04 02:31:00 +0000 UTC" firstStartedPulling="2026-04-04 02:31:01.826616271 +0000 UTC m=+2141.492391391" lastFinishedPulling="2026-04-04 02:31:02.218734319 +0000 UTC m=+2141.884509479" observedRunningTime="2026-04-04 02:31:02.792690111 +0000 UTC m=+2142.458465241" watchObservedRunningTime="2026-04-04 02:31:02.803215511 +0000 UTC m=+2142.468990651"
Apr 04 02:31:04 crc kubenswrapper[4681]: I0404 02:31:04.801328 4681 generic.go:334] "Generic (PLEG): container finished" podID="3291d540-df5f-43ec-a016-a06df4e58ce6" containerID="3db595742b7feee54aea205a033817d5ea87ccea09326510a166596aea1a2447" exitCode=0
Apr 04 02:31:04 crc kubenswrapper[4681]: I0404 02:31:04.801399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws" event={"ID":"3291d540-df5f-43ec-a016-a06df4e58ce6","Type":"ContainerDied","Data":"3db595742b7feee54aea205a033817d5ea87ccea09326510a166596aea1a2447"}
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.266202 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.312118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory\") pod \"3291d540-df5f-43ec-a016-a06df4e58ce6\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") "
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.312304 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam\") pod \"3291d540-df5f-43ec-a016-a06df4e58ce6\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") "
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.312381 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6scps\" (UniqueName: \"kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps\") pod \"3291d540-df5f-43ec-a016-a06df4e58ce6\" (UID: \"3291d540-df5f-43ec-a016-a06df4e58ce6\") "
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.318193 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps" (OuterVolumeSpecName: "kube-api-access-6scps") pod "3291d540-df5f-43ec-a016-a06df4e58ce6" (UID: "3291d540-df5f-43ec-a016-a06df4e58ce6"). InnerVolumeSpecName "kube-api-access-6scps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.339717 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory" (OuterVolumeSpecName: "inventory") pod "3291d540-df5f-43ec-a016-a06df4e58ce6" (UID: "3291d540-df5f-43ec-a016-a06df4e58ce6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.358330 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3291d540-df5f-43ec-a016-a06df4e58ce6" (UID: "3291d540-df5f-43ec-a016-a06df4e58ce6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.414955 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6scps\" (UniqueName: \"kubernetes.io/projected/3291d540-df5f-43ec-a016-a06df4e58ce6-kube-api-access-6scps\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.415026 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-inventory\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.415043 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3291d540-df5f-43ec-a016-a06df4e58ce6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.838731 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws" event={"ID":"3291d540-df5f-43ec-a016-a06df4e58ce6","Type":"ContainerDied","Data":"dcfd2e622cb3aba086f4773c20e8143f5929c8a06fd61322862c558b9b26697e"}
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.838774 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfd2e622cb3aba086f4773c20e8143f5929c8a06fd61322862c558b9b26697e"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.838808 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-4x4ws"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.900941 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"]
Apr 04 02:31:06 crc kubenswrapper[4681]: E0404 02:31:06.901723 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3291d540-df5f-43ec-a016-a06df4e58ce6" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.901744 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3291d540-df5f-43ec-a016-a06df4e58ce6" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.901945 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3291d540-df5f-43ec-a016-a06df4e58ce6" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.902614 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.904374 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.904753 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.906052 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.906368 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd"
Apr 04 02:31:06 crc kubenswrapper[4681]: I0404 02:31:06.924970 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"]
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.027524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.027604 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.027831 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t89pv\" (UniqueName: \"kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.027978 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.130211 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.130428 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t89pv\" (UniqueName: \"kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.130497 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.130722 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.135600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.138916 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"
Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.143044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.153489 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t89pv\" (UniqueName: \"kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.221000 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.588969 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64"] Apr 04 02:31:07 crc kubenswrapper[4681]: I0404 02:31:07.865572 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" event={"ID":"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592","Type":"ContainerStarted","Data":"1a7ef46b7fed968b40cffcc3eabc4a1595675f0beb0a22edf1f3efea8f5af987"} Apr 04 02:31:08 crc kubenswrapper[4681]: I0404 02:31:08.878771 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" event={"ID":"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592","Type":"ContainerStarted","Data":"7607b4b2ac759c1ffa23b22344b037f2dbfc1950c1816b716956c65e37b89df0"} Apr 04 02:31:08 crc kubenswrapper[4681]: I0404 02:31:08.899306 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" podStartSLOduration=2.483170336 podStartE2EDuration="2.899284785s" podCreationTimestamp="2026-04-04 02:31:06 +0000 UTC" firstStartedPulling="2026-04-04 02:31:07.592878351 +0000 UTC m=+2147.258653481" 
lastFinishedPulling="2026-04-04 02:31:08.00899281 +0000 UTC m=+2147.674767930" observedRunningTime="2026-04-04 02:31:08.897400292 +0000 UTC m=+2148.563175422" watchObservedRunningTime="2026-04-04 02:31:08.899284785 +0000 UTC m=+2148.565059905" Apr 04 02:31:27 crc kubenswrapper[4681]: I0404 02:31:27.191927 4681 scope.go:117] "RemoveContainer" containerID="74a88161c23f2dc24c5f4ba47501b13f8a0bb231bd5d732d3c6886ea8fb12991" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.156241 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587832-grcbh"] Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.158400 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.161035 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.161074 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.161055 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.176726 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587832-grcbh"] Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.231478 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w577z\" (UniqueName: \"kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z\") pod \"auto-csr-approver-29587832-grcbh\" (UID: \"34cf84ca-142e-4466-b911-1886a9468d9e\") " pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.333815 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w577z\" (UniqueName: \"kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z\") pod \"auto-csr-approver-29587832-grcbh\" (UID: \"34cf84ca-142e-4466-b911-1886a9468d9e\") " pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.351611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w577z\" (UniqueName: \"kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z\") pod \"auto-csr-approver-29587832-grcbh\" (UID: \"34cf84ca-142e-4466-b911-1886a9468d9e\") " pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.479550 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:00 crc kubenswrapper[4681]: I0404 02:32:00.926765 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587832-grcbh"] Apr 04 02:32:01 crc kubenswrapper[4681]: I0404 02:32:01.449568 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587832-grcbh" event={"ID":"34cf84ca-142e-4466-b911-1886a9468d9e","Type":"ContainerStarted","Data":"c128fd0b31a4ea487abb4229c45e0687b70a3c4b68d04ba34bfc30c85ea71c46"} Apr 04 02:32:02 crc kubenswrapper[4681]: I0404 02:32:02.461878 4681 generic.go:334] "Generic (PLEG): container finished" podID="34cf84ca-142e-4466-b911-1886a9468d9e" containerID="9d325544c1fadf7e3b3e6963472a95d649e7ef877e63d44a9ab4a29b2192cb02" exitCode=0 Apr 04 02:32:02 crc kubenswrapper[4681]: I0404 02:32:02.462014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587832-grcbh" 
event={"ID":"34cf84ca-142e-4466-b911-1886a9468d9e","Type":"ContainerDied","Data":"9d325544c1fadf7e3b3e6963472a95d649e7ef877e63d44a9ab4a29b2192cb02"} Apr 04 02:32:03 crc kubenswrapper[4681]: I0404 02:32:03.811897 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:03 crc kubenswrapper[4681]: I0404 02:32:03.930866 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w577z\" (UniqueName: \"kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z\") pod \"34cf84ca-142e-4466-b911-1886a9468d9e\" (UID: \"34cf84ca-142e-4466-b911-1886a9468d9e\") " Apr 04 02:32:03 crc kubenswrapper[4681]: I0404 02:32:03.936906 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z" (OuterVolumeSpecName: "kube-api-access-w577z") pod "34cf84ca-142e-4466-b911-1886a9468d9e" (UID: "34cf84ca-142e-4466-b911-1886a9468d9e"). InnerVolumeSpecName "kube-api-access-w577z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.035123 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w577z\" (UniqueName: \"kubernetes.io/projected/34cf84ca-142e-4466-b911-1886a9468d9e-kube-api-access-w577z\") on node \"crc\" DevicePath \"\"" Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.484938 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587832-grcbh" event={"ID":"34cf84ca-142e-4466-b911-1886a9468d9e","Type":"ContainerDied","Data":"c128fd0b31a4ea487abb4229c45e0687b70a3c4b68d04ba34bfc30c85ea71c46"} Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.484986 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c128fd0b31a4ea487abb4229c45e0687b70a3c4b68d04ba34bfc30c85ea71c46" Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.484985 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587832-grcbh" Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.884396 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587826-vvnqf"] Apr 04 02:32:04 crc kubenswrapper[4681]: I0404 02:32:04.891884 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587826-vvnqf"] Apr 04 02:32:05 crc kubenswrapper[4681]: I0404 02:32:05.215212 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9130bd5c-6c50-412e-887f-4b22f4bc5377" path="/var/lib/kubelet/pods/9130bd5c-6c50-412e-887f-4b22f4bc5377/volumes" Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.036761 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-2vj6j"] Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.047705 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-2vj6j"] 
Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.081139 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-5ngdh"] Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.096592 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-tfrts"] Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.108703 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-5ngdh"] Apr 04 02:32:10 crc kubenswrapper[4681]: I0404 02:32:10.119094 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-tfrts"] Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.032453 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-lnswn"] Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.042125 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-pwvnn"] Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.054973 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-pwvnn"] Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.063786 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-lnswn"] Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.215700 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="658ab0b4-3080-4229-bda8-98cdaeedd719" path="/var/lib/kubelet/pods/658ab0b4-3080-4229-bda8-98cdaeedd719/volumes" Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.216570 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c9c400-d63b-4f2a-82fe-e178b9d8041d" path="/var/lib/kubelet/pods/91c9c400-d63b-4f2a-82fe-e178b9d8041d/volumes" Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.218099 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7040185-eeba-423a-b853-8b0845725ca7" 
path="/var/lib/kubelet/pods/d7040185-eeba-423a-b853-8b0845725ca7/volumes" Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.219353 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5f3138-bfae-4200-9bff-80e1ceae2086" path="/var/lib/kubelet/pods/dc5f3138-bfae-4200-9bff-80e1ceae2086/volumes" Apr 04 02:32:11 crc kubenswrapper[4681]: I0404 02:32:11.223758 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa" path="/var/lib/kubelet/pods/f7f9ce4e-06ad-4f03-95c4-555ac2fcaeaa/volumes" Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.079585 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4d4e-account-create-update-vwb26"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.091946 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-fd52-account-create-update-g868v"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.103340 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b64f-account-create-update-rfm44"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.115365 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4d4e-account-create-update-vwb26"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.124190 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-fd52-account-create-update-g868v"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.132236 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b64f-account-create-update-rfm44"] Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.218662 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b80e6a4-dd65-4faa-8163-342276cd3481" path="/var/lib/kubelet/pods/0b80e6a4-dd65-4faa-8163-342276cd3481/volumes" Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.220594 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="747c7dee-388d-4dc0-8a14-12c94c004057" path="/var/lib/kubelet/pods/747c7dee-388d-4dc0-8a14-12c94c004057/volumes" Apr 04 02:32:15 crc kubenswrapper[4681]: I0404 02:32:15.222997 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d76e7add-8e4a-430f-ac78-55dd1539cb37" path="/var/lib/kubelet/pods/d76e7add-8e4a-430f-ac78-55dd1539cb37/volumes" Apr 04 02:32:16 crc kubenswrapper[4681]: I0404 02:32:16.036234 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-5bc0-account-create-update-hdl7t"] Apr 04 02:32:16 crc kubenswrapper[4681]: I0404 02:32:16.046493 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-5bc0-account-create-update-hdl7t"] Apr 04 02:32:17 crc kubenswrapper[4681]: I0404 02:32:17.036399 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-ce30-account-create-update-6wsbx"] Apr 04 02:32:17 crc kubenswrapper[4681]: I0404 02:32:17.046351 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-ce30-account-create-update-6wsbx"] Apr 04 02:32:17 crc kubenswrapper[4681]: I0404 02:32:17.214202 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89925da5-3840-4ec1-9bbb-1f518d3381b9" path="/var/lib/kubelet/pods/89925da5-3840-4ec1-9bbb-1f518d3381b9/volumes" Apr 04 02:32:17 crc kubenswrapper[4681]: I0404 02:32:17.214967 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1fdc7f-09be-4dd6-8b31-ff80353025e3" path="/var/lib/kubelet/pods/bf1fdc7f-09be-4dd6-8b31-ff80353025e3/volumes" Apr 04 02:32:18 crc kubenswrapper[4681]: I0404 02:32:18.033853 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7lzkj"] Apr 04 02:32:18 crc kubenswrapper[4681]: I0404 02:32:18.064545 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7lzkj"] Apr 04 02:32:19 crc kubenswrapper[4681]: I0404 02:32:19.035774 4681 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4dhqt"] Apr 04 02:32:19 crc kubenswrapper[4681]: I0404 02:32:19.045004 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4dhqt"] Apr 04 02:32:19 crc kubenswrapper[4681]: I0404 02:32:19.221123 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c6ae52-4069-4291-bf5b-2a3567e923d0" path="/var/lib/kubelet/pods/78c6ae52-4069-4291-bf5b-2a3567e923d0/volumes" Apr 04 02:32:19 crc kubenswrapper[4681]: I0404 02:32:19.222608 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c378514c-b92c-4cd6-83a0-c1ac658b6e9b" path="/var/lib/kubelet/pods/c378514c-b92c-4cd6-83a0-c1ac658b6e9b/volumes" Apr 04 02:32:26 crc kubenswrapper[4681]: I0404 02:32:26.524998 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:32:26 crc kubenswrapper[4681]: I0404 02:32:26.525501 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.278210 4681 scope.go:117] "RemoveContainer" containerID="7e4c9137d135b2fcb0f177a4f676ad1683c4862f5f34ce907a485533a6cabf04" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.303123 4681 scope.go:117] "RemoveContainer" containerID="0d2683cc806fef9b17226200f3ba10d873e4c34436ebcca3867b93dfab439352" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.359044 4681 scope.go:117] "RemoveContainer" 
containerID="015e6476b6f4adbaa77e32abbcceff10eb2bb8d539ab9c4cd2759e3b907df6de" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.393820 4681 scope.go:117] "RemoveContainer" containerID="5bf421c1163064b0b3abe2737121078d91b5601bc6203f33eb5f9654145e0ced" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.438997 4681 scope.go:117] "RemoveContainer" containerID="13c21cee4646c0d73633253387281a7f3991e88ca65e2c6fe760a57c88cb50c6" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.487453 4681 scope.go:117] "RemoveContainer" containerID="7d17ec41e4b267bce3a20031accb704f09acaf9e6a0a2f9431a94dd6889ed48d" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.533992 4681 scope.go:117] "RemoveContainer" containerID="ebc1366d022faa787b55cb6ca943489edc330ab3c212b0e79399c09278bb276d" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.554335 4681 scope.go:117] "RemoveContainer" containerID="52c68d734ba6a9807e80976e1095fb0728fa39370bd60b151a1f78a168d61c18" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.592784 4681 scope.go:117] "RemoveContainer" containerID="8027fab556f5ede2774072e77d8aafeadb021899a22270658e0242f7b2c45284" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.614590 4681 scope.go:117] "RemoveContainer" containerID="4dcb735305ffd307ec46ac893d47f664e53672c58324610bf11041ba59e76b4e" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.636371 4681 scope.go:117] "RemoveContainer" containerID="c53d88edb80844cd47a7c826429c345ad1734c4067ea19cf431bddaa3cf78c88" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.658781 4681 scope.go:117] "RemoveContainer" containerID="21af8f396dcbc6ca76360326bf1de1bf03c9709327dce1353dafb881a6d9abaa" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.680035 4681 scope.go:117] "RemoveContainer" containerID="6c4073f2a2f3cb8c96e83aaa13dfcb56976589c7f23d418e6f0c29b290422fd1" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.701778 4681 scope.go:117] "RemoveContainer" 
containerID="9947c061de491d1176770280a3cafeab6827ef53fcf165fc754e7423901c82da" Apr 04 02:32:27 crc kubenswrapper[4681]: I0404 02:32:27.734751 4681 scope.go:117] "RemoveContainer" containerID="10d0ffbcb1100858bab42542e23f39fb1b56d55c49a433c8c7c0a33121827542" Apr 04 02:32:28 crc kubenswrapper[4681]: I0404 02:32:28.032919 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3458-account-create-update-ldpbt"] Apr 04 02:32:28 crc kubenswrapper[4681]: I0404 02:32:28.042864 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-81c4-account-create-update-xh2xd"] Apr 04 02:32:28 crc kubenswrapper[4681]: I0404 02:32:28.052504 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-81c4-account-create-update-xh2xd"] Apr 04 02:32:28 crc kubenswrapper[4681]: I0404 02:32:28.061609 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3458-account-create-update-ldpbt"] Apr 04 02:32:29 crc kubenswrapper[4681]: I0404 02:32:29.215293 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cffa15-91ef-48fe-bd03-46cf3e2b4b9c" path="/var/lib/kubelet/pods/88cffa15-91ef-48fe-bd03-46cf3e2b4b9c/volumes" Apr 04 02:32:29 crc kubenswrapper[4681]: I0404 02:32:29.216083 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d042e61e-59c3-408a-a5c1-95f6f8f52c21" path="/var/lib/kubelet/pods/d042e61e-59c3-408a-a5c1-95f6f8f52c21/volumes" Apr 04 02:32:56 crc kubenswrapper[4681]: I0404 02:32:56.524232 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:32:56 crc kubenswrapper[4681]: I0404 02:32:56.525222 4681 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.849324 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:25 crc kubenswrapper[4681]: E0404 02:33:25.850562 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34cf84ca-142e-4466-b911-1886a9468d9e" containerName="oc" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.850581 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="34cf84ca-142e-4466-b911-1886a9468d9e" containerName="oc" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.850841 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="34cf84ca-142e-4466-b911-1886a9468d9e" containerName="oc" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.853107 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.866259 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.945309 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8pbw\" (UniqueName: \"kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.945351 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:25 crc kubenswrapper[4681]: I0404 02:33:25.945431 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.047165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.047366 4681 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-n8pbw\" (UniqueName: \"kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.047404 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.048093 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.048398 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.048641 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-4q2fr"] Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.060810 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-4q2fr"] Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.071962 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8pbw\" (UniqueName: 
\"kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw\") pod \"redhat-operators-x49bx\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.193969 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.527609 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.527947 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.527996 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.528803 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.528864 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56" gracePeriod=600 Apr 04 02:33:26 crc kubenswrapper[4681]: I0404 02:33:26.711011 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.212827 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b756c8-3f48-42ed-a4e4-895e2335fdb3" path="/var/lib/kubelet/pods/18b756c8-3f48-42ed-a4e4-895e2335fdb3/volumes" Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.413510 4681 generic.go:334] "Generic (PLEG): container finished" podID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerID="9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7" exitCode=0 Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.413609 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerDied","Data":"9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7"} Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.413662 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerStarted","Data":"22e6fe6213be6cb009f7aea63f3f32c75bbe7d19f110fc3dbe102ffd626e30e4"} Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.416316 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56" exitCode=0 Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.416347 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56"} Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.416375 4681 scope.go:117] "RemoveContainer" containerID="a115e63e478f2dcd96d6a1ea5579c4abd8544184899aee1395f08415f92823da" Apr 04 02:33:27 crc kubenswrapper[4681]: I0404 02:33:27.970153 4681 scope.go:117] "RemoveContainer" containerID="15b539cfe9d4581737836dbe14ea6deedb8a8fad8ff36f006ff507ffa4ac7136" Apr 04 02:33:28 crc kubenswrapper[4681]: I0404 02:33:28.063354 4681 scope.go:117] "RemoveContainer" containerID="f3646211de8e3cb02146e2c4f79e70656d3d863b138564f95885191fb284eab9" Apr 04 02:33:28 crc kubenswrapper[4681]: I0404 02:33:28.093062 4681 scope.go:117] "RemoveContainer" containerID="72bedd2aeed60c819f3ec3bb587a679907467c7231b1ce44fab871ce6c27f73c" Apr 04 02:33:28 crc kubenswrapper[4681]: I0404 02:33:28.434835 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee"} Apr 04 02:33:29 crc kubenswrapper[4681]: I0404 02:33:29.444524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerStarted","Data":"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca"} Apr 04 02:33:30 crc kubenswrapper[4681]: I0404 02:33:30.460194 4681 generic.go:334] "Generic (PLEG): container finished" podID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerID="d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca" exitCode=0 Apr 04 02:33:30 crc kubenswrapper[4681]: I0404 02:33:30.460253 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerDied","Data":"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca"} Apr 04 02:33:35 crc kubenswrapper[4681]: I0404 02:33:35.516989 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerStarted","Data":"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1"} Apr 04 02:33:35 crc kubenswrapper[4681]: I0404 02:33:35.548687 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x49bx" podStartSLOduration=3.613535125 podStartE2EDuration="10.548662689s" podCreationTimestamp="2026-04-04 02:33:25 +0000 UTC" firstStartedPulling="2026-04-04 02:33:27.415072969 +0000 UTC m=+2287.080848099" lastFinishedPulling="2026-04-04 02:33:34.350200513 +0000 UTC m=+2294.015975663" observedRunningTime="2026-04-04 02:33:35.543739364 +0000 UTC m=+2295.209514494" watchObservedRunningTime="2026-04-04 02:33:35.548662689 +0000 UTC m=+2295.214437809" Apr 04 02:33:36 crc kubenswrapper[4681]: I0404 02:33:36.194541 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:36 crc kubenswrapper[4681]: I0404 02:33:36.195479 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:37 crc kubenswrapper[4681]: I0404 02:33:37.243884 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x49bx" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="registry-server" probeResult="failure" output=< Apr 04 02:33:37 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:33:37 crc kubenswrapper[4681]: > Apr 04 02:33:46 crc kubenswrapper[4681]: I0404 
02:33:46.242917 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:46 crc kubenswrapper[4681]: I0404 02:33:46.292844 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:46 crc kubenswrapper[4681]: I0404 02:33:46.478907 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:47 crc kubenswrapper[4681]: I0404 02:33:47.674675 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x49bx" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="registry-server" containerID="cri-o://d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1" gracePeriod=2 Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.254807 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.313144 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content\") pod \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.313345 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities\") pod \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.313409 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8pbw\" (UniqueName: 
\"kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw\") pod \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\" (UID: \"fbb216c0-81b4-4cec-a9eb-35dea42c143f\") " Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.315007 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities" (OuterVolumeSpecName: "utilities") pod "fbb216c0-81b4-4cec-a9eb-35dea42c143f" (UID: "fbb216c0-81b4-4cec-a9eb-35dea42c143f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.318938 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw" (OuterVolumeSpecName: "kube-api-access-n8pbw") pod "fbb216c0-81b4-4cec-a9eb-35dea42c143f" (UID: "fbb216c0-81b4-4cec-a9eb-35dea42c143f"). InnerVolumeSpecName "kube-api-access-n8pbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.416113 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8pbw\" (UniqueName: \"kubernetes.io/projected/fbb216c0-81b4-4cec-a9eb-35dea42c143f-kube-api-access-n8pbw\") on node \"crc\" DevicePath \"\"" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.416154 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.452753 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbb216c0-81b4-4cec-a9eb-35dea42c143f" (UID: "fbb216c0-81b4-4cec-a9eb-35dea42c143f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.517943 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbb216c0-81b4-4cec-a9eb-35dea42c143f-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.689972 4681 generic.go:334] "Generic (PLEG): container finished" podID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerID="d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1" exitCode=0 Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.690036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerDied","Data":"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1"} Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.690061 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x49bx" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.690072 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x49bx" event={"ID":"fbb216c0-81b4-4cec-a9eb-35dea42c143f","Type":"ContainerDied","Data":"22e6fe6213be6cb009f7aea63f3f32c75bbe7d19f110fc3dbe102ffd626e30e4"} Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.690096 4681 scope.go:117] "RemoveContainer" containerID="d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.738006 4681 scope.go:117] "RemoveContainer" containerID="d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.749066 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.757687 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x49bx"] Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.763494 4681 scope.go:117] "RemoveContainer" containerID="9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.823574 4681 scope.go:117] "RemoveContainer" containerID="d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1" Apr 04 02:33:48 crc kubenswrapper[4681]: E0404 02:33:48.824132 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1\": container with ID starting with d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1 not found: ID does not exist" containerID="d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.824186 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1"} err="failed to get container status \"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1\": rpc error: code = NotFound desc = could not find container \"d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1\": container with ID starting with d703ff356a91bdf15c802b70e2b3c4602a63a9b0871d8d6941ce685d4c421fb1 not found: ID does not exist" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.824216 4681 scope.go:117] "RemoveContainer" containerID="d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca" Apr 04 02:33:48 crc kubenswrapper[4681]: E0404 02:33:48.824739 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca\": container with ID starting with d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca not found: ID does not exist" containerID="d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.824780 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca"} err="failed to get container status \"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca\": rpc error: code = NotFound desc = could not find container \"d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca\": container with ID starting with d17d7e2e16006a64a38a6ec784c3520ddbb19abb63a8345799d60a5302db62ca not found: ID does not exist" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.824808 4681 scope.go:117] "RemoveContainer" containerID="9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7" Apr 04 02:33:48 crc kubenswrapper[4681]: E0404 
02:33:48.825368 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7\": container with ID starting with 9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7 not found: ID does not exist" containerID="9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7" Apr 04 02:33:48 crc kubenswrapper[4681]: I0404 02:33:48.825420 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7"} err="failed to get container status \"9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7\": rpc error: code = NotFound desc = could not find container \"9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7\": container with ID starting with 9363345bd7024bd65cfb4560816623cb2dc00ed0ec76d5d92055b13e97708da7 not found: ID does not exist" Apr 04 02:33:49 crc kubenswrapper[4681]: I0404 02:33:49.213786 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" path="/var/lib/kubelet/pods/fbb216c0-81b4-4cec-a9eb-35dea42c143f/volumes" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.145961 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587834-t97x5"] Apr 04 02:34:00 crc kubenswrapper[4681]: E0404 02:34:00.147061 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="registry-server" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.147077 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="registry-server" Apr 04 02:34:00 crc kubenswrapper[4681]: E0404 02:34:00.147097 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="extract-utilities" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.147108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="extract-utilities" Apr 04 02:34:00 crc kubenswrapper[4681]: E0404 02:34:00.147148 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="extract-content" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.147156 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="extract-content" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.147470 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbb216c0-81b4-4cec-a9eb-35dea42c143f" containerName="registry-server" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.148485 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.155642 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587834-t97x5"] Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.157209 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.157277 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.157628 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.247551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2rsl\" (UniqueName: 
\"kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl\") pod \"auto-csr-approver-29587834-t97x5\" (UID: \"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab\") " pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.350632 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2rsl\" (UniqueName: \"kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl\") pod \"auto-csr-approver-29587834-t97x5\" (UID: \"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab\") " pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.372572 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2rsl\" (UniqueName: \"kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl\") pod \"auto-csr-approver-29587834-t97x5\" (UID: \"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab\") " pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.474455 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.821150 4681 generic.go:334] "Generic (PLEG): container finished" podID="00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" containerID="7607b4b2ac759c1ffa23b22344b037f2dbfc1950c1816b716956c65e37b89df0" exitCode=0 Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.821240 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" event={"ID":"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592","Type":"ContainerDied","Data":"7607b4b2ac759c1ffa23b22344b037f2dbfc1950c1816b716956c65e37b89df0"} Apr 04 02:34:00 crc kubenswrapper[4681]: I0404 02:34:00.902249 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587834-t97x5"] Apr 04 02:34:01 crc kubenswrapper[4681]: I0404 02:34:01.833408 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587834-t97x5" event={"ID":"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab","Type":"ContainerStarted","Data":"1480f655c207fbac944b08202bfeb1377f8ab748cbb479f71400f31680959ff2"} Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.233758 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.288110 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t89pv\" (UniqueName: \"kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv\") pod \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.288171 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle\") pod \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.288339 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam\") pod \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.288419 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory\") pod \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\" (UID: \"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592\") " Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.294043 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv" (OuterVolumeSpecName: "kube-api-access-t89pv") pod "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" (UID: "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592"). InnerVolumeSpecName "kube-api-access-t89pv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.295947 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" (UID: "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.319299 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" (UID: "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.321748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory" (OuterVolumeSpecName: "inventory") pod "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" (UID: "00befa4c-4be8-4cc4-8e8e-46c0bb3b6592"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.391110 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t89pv\" (UniqueName: \"kubernetes.io/projected/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-kube-api-access-t89pv\") on node \"crc\" DevicePath \"\"" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.391146 4681 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.391159 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.391172 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/00befa4c-4be8-4cc4-8e8e-46c0bb3b6592-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.845393 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.845399 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64" event={"ID":"00befa4c-4be8-4cc4-8e8e-46c0bb3b6592","Type":"ContainerDied","Data":"1a7ef46b7fed968b40cffcc3eabc4a1595675f0beb0a22edf1f3efea8f5af987"} Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.845733 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a7ef46b7fed968b40cffcc3eabc4a1595675f0beb0a22edf1f3efea8f5af987" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.847817 4681 generic.go:334] "Generic (PLEG): container finished" podID="ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" containerID="bff0ae772352cc97eb4be6c1f87de25ba85314992d0cb8fb6a1107e40e4b9ad6" exitCode=0 Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.847918 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587834-t97x5" event={"ID":"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab","Type":"ContainerDied","Data":"bff0ae772352cc97eb4be6c1f87de25ba85314992d0cb8fb6a1107e40e4b9ad6"} Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.928651 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr"] Apr 04 02:34:02 crc kubenswrapper[4681]: E0404 02:34:02.929288 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.929379 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.929693 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="00befa4c-4be8-4cc4-8e8e-46c0bb3b6592" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.930394 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.934028 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.934119 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.934316 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.934653 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:34:02 crc kubenswrapper[4681]: I0404 02:34:02.956201 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr"] Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.005779 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.005880 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgthx\" (UniqueName: 
\"kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.005967 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.108963 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.109088 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgthx\" (UniqueName: \"kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.109225 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: 
\"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.113411 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.115135 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.129085 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgthx\" (UniqueName: \"kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-99rqr\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.256843 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.809617 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr"] Apr 04 02:34:03 crc kubenswrapper[4681]: I0404 02:34:03.859228 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" event={"ID":"b3b7061a-37ce-4302-a3a3-f06aff60e3a3","Type":"ContainerStarted","Data":"c2b3c3b02552a5975c8aefb6331608cf9e76aabbefd44278afb715069813dc96"} Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.080807 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.231288 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2rsl\" (UniqueName: \"kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl\") pod \"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab\" (UID: \"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab\") " Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.235750 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl" (OuterVolumeSpecName: "kube-api-access-q2rsl") pod "ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" (UID: "ad3faf5c-9f9c-4abb-9fec-ecf6960105ab"). InnerVolumeSpecName "kube-api-access-q2rsl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.333915 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2rsl\" (UniqueName: \"kubernetes.io/projected/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab-kube-api-access-q2rsl\") on node \"crc\" DevicePath \"\"" Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.870367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" event={"ID":"b3b7061a-37ce-4302-a3a3-f06aff60e3a3","Type":"ContainerStarted","Data":"80c67edb16152b53bda743e89baac904b416695522750c6b44dc85d1264db96e"} Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.878683 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587834-t97x5" event={"ID":"ad3faf5c-9f9c-4abb-9fec-ecf6960105ab","Type":"ContainerDied","Data":"1480f655c207fbac944b08202bfeb1377f8ab748cbb479f71400f31680959ff2"} Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.878728 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1480f655c207fbac944b08202bfeb1377f8ab748cbb479f71400f31680959ff2" Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.878818 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587834-t97x5" Apr 04 02:34:04 crc kubenswrapper[4681]: I0404 02:34:04.894780 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" podStartSLOduration=2.517510707 podStartE2EDuration="2.894761534s" podCreationTimestamp="2026-04-04 02:34:02 +0000 UTC" firstStartedPulling="2026-04-04 02:34:03.819709037 +0000 UTC m=+2323.485484157" lastFinishedPulling="2026-04-04 02:34:04.196959864 +0000 UTC m=+2323.862734984" observedRunningTime="2026-04-04 02:34:04.894663681 +0000 UTC m=+2324.560438801" watchObservedRunningTime="2026-04-04 02:34:04.894761534 +0000 UTC m=+2324.560536664" Apr 04 02:34:05 crc kubenswrapper[4681]: I0404 02:34:05.161329 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587828-ngsn5"] Apr 04 02:34:05 crc kubenswrapper[4681]: I0404 02:34:05.171752 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587828-ngsn5"] Apr 04 02:34:05 crc kubenswrapper[4681]: I0404 02:34:05.211453 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25c42585-4d6b-4b0b-8ae4-d2913e833b34" path="/var/lib/kubelet/pods/25c42585-4d6b-4b0b-8ae4-d2913e833b34/volumes" Apr 04 02:34:25 crc kubenswrapper[4681]: I0404 02:34:25.039869 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ck5kg"] Apr 04 02:34:25 crc kubenswrapper[4681]: I0404 02:34:25.052376 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ck5kg"] Apr 04 02:34:25 crc kubenswrapper[4681]: I0404 02:34:25.214163 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab5ad0f4-4c98-4351-83df-037a25fe6447" path="/var/lib/kubelet/pods/ab5ad0f4-4c98-4351-83df-037a25fe6447/volumes" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.213651 4681 scope.go:117] "RemoveContainer" 
containerID="fe8eb6697cf3f151e7f12279d37ffd126e139882026bf4e41885afea76100b12" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.255584 4681 scope.go:117] "RemoveContainer" containerID="aeaccd3eb19be27ef00b5d7f9e2bc27017b9474b0500bc6ac26f0013dfc3b9f2" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.335782 4681 scope.go:117] "RemoveContainer" containerID="42da8ad75335c5becfcb54d152bc67843e6986e0d8a5a9d34d9d03a922114826" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.364206 4681 scope.go:117] "RemoveContainer" containerID="9890d47df7f2165a2a2111ce5a9e7bec4e87a6fd1f43fb7380ba868f835e5c08" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.383324 4681 scope.go:117] "RemoveContainer" containerID="459c06cd809192edf78ac623358ea754e0ca8f5b49f6fb7755df40d1f148c468" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.406855 4681 scope.go:117] "RemoveContainer" containerID="3a5cd134d53b4dd5cb2ad4a9b33e01d47db4c43856148d962308e8d387b51070" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.427595 4681 scope.go:117] "RemoveContainer" containerID="f99760b19a10d6991768c5b00c03b065466eb4182071e7c363261547edb8ae3b" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.479240 4681 scope.go:117] "RemoveContainer" containerID="e57fbec9638241c28657308939ec81020cacd82283192da6742fd4fe223deeaa" Apr 04 02:34:28 crc kubenswrapper[4681]: I0404 02:34:28.516401 4681 scope.go:117] "RemoveContainer" containerID="c5eca742119d2cb291e484e36d8e5886d69c117055b411bf5be0f5e309cde66a" Apr 04 02:34:57 crc kubenswrapper[4681]: I0404 02:34:57.038495 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-hrgm8"] Apr 04 02:34:57 crc kubenswrapper[4681]: I0404 02:34:57.047912 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-hrgm8"] Apr 04 02:34:57 crc kubenswrapper[4681]: I0404 02:34:57.212233 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="0011158a-2855-4b60-9798-77badda0f40c" path="/var/lib/kubelet/pods/0011158a-2855-4b60-9798-77badda0f40c/volumes" Apr 04 02:35:28 crc kubenswrapper[4681]: I0404 02:35:28.650436 4681 scope.go:117] "RemoveContainer" containerID="38cc4e3a4a4b3258af6cd176ca88aa5d76a1f8c1cb46392b6b2217526bbf2c23" Apr 04 02:35:31 crc kubenswrapper[4681]: I0404 02:35:31.764867 4681 generic.go:334] "Generic (PLEG): container finished" podID="b3b7061a-37ce-4302-a3a3-f06aff60e3a3" containerID="80c67edb16152b53bda743e89baac904b416695522750c6b44dc85d1264db96e" exitCode=0 Apr 04 02:35:31 crc kubenswrapper[4681]: I0404 02:35:31.764956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" event={"ID":"b3b7061a-37ce-4302-a3a3-f06aff60e3a3","Type":"ContainerDied","Data":"80c67edb16152b53bda743e89baac904b416695522750c6b44dc85d1264db96e"} Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.261157 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.351178 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory\") pod \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.351393 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgthx\" (UniqueName: \"kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx\") pod \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.351437 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam\") pod \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\" (UID: \"b3b7061a-37ce-4302-a3a3-f06aff60e3a3\") " Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.358098 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx" (OuterVolumeSpecName: "kube-api-access-hgthx") pod "b3b7061a-37ce-4302-a3a3-f06aff60e3a3" (UID: "b3b7061a-37ce-4302-a3a3-f06aff60e3a3"). InnerVolumeSpecName "kube-api-access-hgthx". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.384611 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory" (OuterVolumeSpecName: "inventory") pod "b3b7061a-37ce-4302-a3a3-f06aff60e3a3" (UID: "b3b7061a-37ce-4302-a3a3-f06aff60e3a3"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.386981 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b3b7061a-37ce-4302-a3a3-f06aff60e3a3" (UID: "b3b7061a-37ce-4302-a3a3-f06aff60e3a3"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.454474 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.454524 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgthx\" (UniqueName: \"kubernetes.io/projected/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-kube-api-access-hgthx\") on node \"crc\" DevicePath \"\"" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.454543 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b3b7061a-37ce-4302-a3a3-f06aff60e3a3-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.785082 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" event={"ID":"b3b7061a-37ce-4302-a3a3-f06aff60e3a3","Type":"ContainerDied","Data":"c2b3c3b02552a5975c8aefb6331608cf9e76aabbefd44278afb715069813dc96"} Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.785122 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2b3c3b02552a5975c8aefb6331608cf9e76aabbefd44278afb715069813dc96" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 
02:35:33.785131 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-99rqr" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.878889 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm"] Apr 04 02:35:33 crc kubenswrapper[4681]: E0404 02:35:33.879423 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b7061a-37ce-4302-a3a3-f06aff60e3a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.879447 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b7061a-37ce-4302-a3a3-f06aff60e3a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Apr 04 02:35:33 crc kubenswrapper[4681]: E0404 02:35:33.879475 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" containerName="oc" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.879484 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" containerName="oc" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.879738 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b7061a-37ce-4302-a3a3-f06aff60e3a3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.879803 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" containerName="oc" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.880852 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.884245 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.884470 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.884596 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.886134 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.889083 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm"] Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.963648 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.963702 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvtxb\" (UniqueName: \"kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 
02:35:33 crc kubenswrapper[4681]: I0404 02:35:33.963895 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.043901 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xmmq7"] Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.057491 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xmmq7"] Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.066584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.066639 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvtxb\" (UniqueName: \"kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.066693 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.071311 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.071341 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.090911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvtxb\" (UniqueName: \"kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-g74jm\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.216019 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.764852 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm"] Apr 04 02:35:34 crc kubenswrapper[4681]: W0404 02:35:34.769615 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1248b6b_52bc_4b4a_b901_afa695bb799f.slice/crio-bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22 WatchSource:0}: Error finding container bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22: Status 404 returned error can't find the container with id bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22 Apr 04 02:35:34 crc kubenswrapper[4681]: I0404 02:35:34.800030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" event={"ID":"e1248b6b-52bc-4b4a-b901-afa695bb799f","Type":"ContainerStarted","Data":"bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22"} Apr 04 02:35:35 crc kubenswrapper[4681]: I0404 02:35:35.364978 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf297ed4-229b-492f-bd27-5ea5e2279816" path="/var/lib/kubelet/pods/bf297ed4-229b-492f-bd27-5ea5e2279816/volumes" Apr 04 02:35:35 crc kubenswrapper[4681]: I0404 02:35:35.814379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" event={"ID":"e1248b6b-52bc-4b4a-b901-afa695bb799f","Type":"ContainerStarted","Data":"7e2600b22a8a0ece2bea453ecc1b3c4456f4090a1f2bf79c8e12427df76389ba"} Apr 04 02:35:35 crc kubenswrapper[4681]: I0404 02:35:35.836285 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" 
podStartSLOduration=2.399755139 podStartE2EDuration="2.836242543s" podCreationTimestamp="2026-04-04 02:35:33 +0000 UTC" firstStartedPulling="2026-04-04 02:35:34.771910568 +0000 UTC m=+2414.437685688" lastFinishedPulling="2026-04-04 02:35:35.208397972 +0000 UTC m=+2414.874173092" observedRunningTime="2026-04-04 02:35:35.831294937 +0000 UTC m=+2415.497070057" watchObservedRunningTime="2026-04-04 02:35:35.836242543 +0000 UTC m=+2415.502017663" Apr 04 02:35:36 crc kubenswrapper[4681]: I0404 02:35:36.040207 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-qdpnl"] Apr 04 02:35:36 crc kubenswrapper[4681]: I0404 02:35:36.053761 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-qdpnl"] Apr 04 02:35:37 crc kubenswrapper[4681]: I0404 02:35:37.038095 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-27c9s"] Apr 04 02:35:37 crc kubenswrapper[4681]: I0404 02:35:37.049280 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-27c9s"] Apr 04 02:35:37 crc kubenswrapper[4681]: I0404 02:35:37.212229 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04422fbb-2a81-4627-9df3-dce05665ec03" path="/var/lib/kubelet/pods/04422fbb-2a81-4627-9df3-dce05665ec03/volumes" Apr 04 02:35:37 crc kubenswrapper[4681]: I0404 02:35:37.212873 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96da315f-3451-45c0-b1fc-687c9d18dccf" path="/var/lib/kubelet/pods/96da315f-3451-45c0-b1fc-687c9d18dccf/volumes" Apr 04 02:35:38 crc kubenswrapper[4681]: I0404 02:35:38.035079 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-9vwnm"] Apr 04 02:35:38 crc kubenswrapper[4681]: I0404 02:35:38.045283 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-9vwnm"] Apr 04 02:35:39 crc kubenswrapper[4681]: I0404 02:35:39.214814 4681 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e" path="/var/lib/kubelet/pods/ac1bff3d-1bb6-4ab9-9540-46f39fea9a8e/volumes" Apr 04 02:35:56 crc kubenswrapper[4681]: I0404 02:35:56.524829 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:35:56 crc kubenswrapper[4681]: I0404 02:35:56.526815 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.143711 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587836-zh5fv"] Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.145473 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.147796 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.152341 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.154275 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.162222 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587836-zh5fv"] Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.163423 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt7f\" (UniqueName: \"kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f\") pod \"auto-csr-approver-29587836-zh5fv\" (UID: \"d5b33547-7be2-4182-878e-f992e13e6c86\") " pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.265161 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xt7f\" (UniqueName: \"kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f\") pod \"auto-csr-approver-29587836-zh5fv\" (UID: \"d5b33547-7be2-4182-878e-f992e13e6c86\") " pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.285048 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xt7f\" (UniqueName: \"kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f\") pod \"auto-csr-approver-29587836-zh5fv\" (UID: \"d5b33547-7be2-4182-878e-f992e13e6c86\") " 
pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.463372 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:00 crc kubenswrapper[4681]: I0404 02:36:00.918929 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587836-zh5fv"] Apr 04 02:36:01 crc kubenswrapper[4681]: I0404 02:36:01.076511 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" event={"ID":"d5b33547-7be2-4182-878e-f992e13e6c86","Type":"ContainerStarted","Data":"347a532b3a4af9751ad19fa03b6fe07fb4874633ff87e6c7d00a38b56ddbe29c"} Apr 04 02:36:02 crc kubenswrapper[4681]: I0404 02:36:02.088248 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" event={"ID":"d5b33547-7be2-4182-878e-f992e13e6c86","Type":"ContainerStarted","Data":"912bd90cbb25f57ca00e377f51fcabe6692ce58f9244d17a84cb89a4f08dfa6e"} Apr 04 02:36:03 crc kubenswrapper[4681]: I0404 02:36:03.101252 4681 generic.go:334] "Generic (PLEG): container finished" podID="d5b33547-7be2-4182-878e-f992e13e6c86" containerID="912bd90cbb25f57ca00e377f51fcabe6692ce58f9244d17a84cb89a4f08dfa6e" exitCode=0 Apr 04 02:36:03 crc kubenswrapper[4681]: I0404 02:36:03.101400 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" event={"ID":"d5b33547-7be2-4182-878e-f992e13e6c86","Type":"ContainerDied","Data":"912bd90cbb25f57ca00e377f51fcabe6692ce58f9244d17a84cb89a4f08dfa6e"} Apr 04 02:36:04 crc kubenswrapper[4681]: I0404 02:36:04.461620 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:04 crc kubenswrapper[4681]: I0404 02:36:04.549157 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xt7f\" (UniqueName: \"kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f\") pod \"d5b33547-7be2-4182-878e-f992e13e6c86\" (UID: \"d5b33547-7be2-4182-878e-f992e13e6c86\") " Apr 04 02:36:04 crc kubenswrapper[4681]: I0404 02:36:04.554676 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f" (OuterVolumeSpecName: "kube-api-access-5xt7f") pod "d5b33547-7be2-4182-878e-f992e13e6c86" (UID: "d5b33547-7be2-4182-878e-f992e13e6c86"). InnerVolumeSpecName "kube-api-access-5xt7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:36:04 crc kubenswrapper[4681]: I0404 02:36:04.651639 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xt7f\" (UniqueName: \"kubernetes.io/projected/d5b33547-7be2-4182-878e-f992e13e6c86-kube-api-access-5xt7f\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:05 crc kubenswrapper[4681]: I0404 02:36:05.119694 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" event={"ID":"d5b33547-7be2-4182-878e-f992e13e6c86","Type":"ContainerDied","Data":"347a532b3a4af9751ad19fa03b6fe07fb4874633ff87e6c7d00a38b56ddbe29c"} Apr 04 02:36:05 crc kubenswrapper[4681]: I0404 02:36:05.119741 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="347a532b3a4af9751ad19fa03b6fe07fb4874633ff87e6c7d00a38b56ddbe29c" Apr 04 02:36:05 crc kubenswrapper[4681]: I0404 02:36:05.119799 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587836-zh5fv" Apr 04 02:36:05 crc kubenswrapper[4681]: I0404 02:36:05.532847 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587830-zbqnl"] Apr 04 02:36:05 crc kubenswrapper[4681]: I0404 02:36:05.541324 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587830-zbqnl"] Apr 04 02:36:06 crc kubenswrapper[4681]: I0404 02:36:06.040355 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5sd54"] Apr 04 02:36:06 crc kubenswrapper[4681]: I0404 02:36:06.048533 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5sd54"] Apr 04 02:36:07 crc kubenswrapper[4681]: I0404 02:36:07.212128 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b185d1fc-0c71-44ee-bb6d-915189acc4d8" path="/var/lib/kubelet/pods/b185d1fc-0c71-44ee-bb6d-915189acc4d8/volumes" Apr 04 02:36:07 crc kubenswrapper[4681]: I0404 02:36:07.213334 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2377ba5-c19e-40bd-918f-e993357fd5e7" path="/var/lib/kubelet/pods/c2377ba5-c19e-40bd-918f-e993357fd5e7/volumes" Apr 04 02:36:26 crc kubenswrapper[4681]: I0404 02:36:26.046532 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-x6pg5"] Apr 04 02:36:26 crc kubenswrapper[4681]: I0404 02:36:26.054328 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-x6pg5"] Apr 04 02:36:26 crc kubenswrapper[4681]: I0404 02:36:26.524452 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:36:26 crc kubenswrapper[4681]: I0404 02:36:26.524526 4681 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:36:27 crc kubenswrapper[4681]: I0404 02:36:27.211819 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2f493b-34f1-492d-834d-50b24313791c" path="/var/lib/kubelet/pods/0f2f493b-34f1-492d-834d-50b24313791c/volumes" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.746810 4681 scope.go:117] "RemoveContainer" containerID="d6503b4cf5a6450cce71c8e8e8835f62ed80d92a0688c8b72802cd1862e2f541" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.796020 4681 scope.go:117] "RemoveContainer" containerID="48562f62ef66c90bddf853a7d75b62412d3429566b2fc45fddf86e672ea4d924" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.847143 4681 scope.go:117] "RemoveContainer" containerID="2c4572453ef291c8bd6015810304578098b778e473aa7ccdb187df0cb8d69cdc" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.875163 4681 scope.go:117] "RemoveContainer" containerID="a915244e7d7e523425e20fe41c6eae5b7c6e80833a51146876261da7673a8ada" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.934449 4681 scope.go:117] "RemoveContainer" containerID="8229141681ecc07a19f839c7acec322417be94f11887bb96ea0c7ea1ebe8263a" Apr 04 02:36:28 crc kubenswrapper[4681]: I0404 02:36:28.974098 4681 scope.go:117] "RemoveContainer" containerID="53a76e0e222e8715df091f7206da8e2c31b906920aef8edfa64147798f0d00a7" Apr 04 02:36:29 crc kubenswrapper[4681]: I0404 02:36:29.030120 4681 scope.go:117] "RemoveContainer" containerID="3594b47fea9f64289499cbf6739da54b8e258e9170155f1eee69bfeb43a9f8d9" Apr 04 02:36:42 crc kubenswrapper[4681]: I0404 02:36:42.657895 4681 generic.go:334] "Generic (PLEG): container finished" podID="e1248b6b-52bc-4b4a-b901-afa695bb799f" 
containerID="7e2600b22a8a0ece2bea453ecc1b3c4456f4090a1f2bf79c8e12427df76389ba" exitCode=0 Apr 04 02:36:42 crc kubenswrapper[4681]: I0404 02:36:42.657960 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" event={"ID":"e1248b6b-52bc-4b4a-b901-afa695bb799f","Type":"ContainerDied","Data":"7e2600b22a8a0ece2bea453ecc1b3c4456f4090a1f2bf79c8e12427df76389ba"} Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.063183 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.202833 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam\") pod \"e1248b6b-52bc-4b4a-b901-afa695bb799f\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.202920 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvtxb\" (UniqueName: \"kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb\") pod \"e1248b6b-52bc-4b4a-b901-afa695bb799f\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.202970 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory\") pod \"e1248b6b-52bc-4b4a-b901-afa695bb799f\" (UID: \"e1248b6b-52bc-4b4a-b901-afa695bb799f\") " Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.209395 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb" (OuterVolumeSpecName: 
"kube-api-access-vvtxb") pod "e1248b6b-52bc-4b4a-b901-afa695bb799f" (UID: "e1248b6b-52bc-4b4a-b901-afa695bb799f"). InnerVolumeSpecName "kube-api-access-vvtxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.234072 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e1248b6b-52bc-4b4a-b901-afa695bb799f" (UID: "e1248b6b-52bc-4b4a-b901-afa695bb799f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.234235 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory" (OuterVolumeSpecName: "inventory") pod "e1248b6b-52bc-4b4a-b901-afa695bb799f" (UID: "e1248b6b-52bc-4b4a-b901-afa695bb799f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.305467 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.305520 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvtxb\" (UniqueName: \"kubernetes.io/projected/e1248b6b-52bc-4b4a-b901-afa695bb799f-kube-api-access-vvtxb\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.305534 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1248b6b-52bc-4b4a-b901-afa695bb799f-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.676778 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" event={"ID":"e1248b6b-52bc-4b4a-b901-afa695bb799f","Type":"ContainerDied","Data":"bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22"} Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.676811 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bda480aa93d6945d6647dd80573fd501836cc2c8bab694bdc581f6abfa1c9d22" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.676840 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-g74jm" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.769892 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm"] Apr 04 02:36:44 crc kubenswrapper[4681]: E0404 02:36:44.770396 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b33547-7be2-4182-878e-f992e13e6c86" containerName="oc" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.770416 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b33547-7be2-4182-878e-f992e13e6c86" containerName="oc" Apr 04 02:36:44 crc kubenswrapper[4681]: E0404 02:36:44.770431 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1248b6b-52bc-4b4a-b901-afa695bb799f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.770440 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1248b6b-52bc-4b4a-b901-afa695bb799f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.770692 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b33547-7be2-4182-878e-f992e13e6c86" containerName="oc" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.770712 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1248b6b-52bc-4b4a-b901-afa695bb799f" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.771454 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:44 crc kubenswrapper[4681]: E0404 02:36:44.778827 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1248b6b_52bc_4b4a_b901_afa695bb799f.slice\": RecentStats: unable to find data in memory cache]" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.780507 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm"] Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.781177 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.781190 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.782091 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.782666 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.915842 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.915970 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:44 crc kubenswrapper[4681]: I0404 02:36:44.916007 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw6dh\" (UniqueName: \"kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.018767 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.018973 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.019029 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw6dh\" (UniqueName: \"kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.027959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.027959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.036809 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw6dh\" (UniqueName: \"kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.091493 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.644073 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm"] Apr 04 02:36:45 crc kubenswrapper[4681]: W0404 02:36:45.649041 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79cff0ca_47f9_4198_abf2_a488089c2ade.slice/crio-e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a WatchSource:0}: Error finding container e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a: Status 404 returned error can't find the container with id e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.653427 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:36:45 crc kubenswrapper[4681]: I0404 02:36:45.689406 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" event={"ID":"79cff0ca-47f9-4198-abf2-a488089c2ade","Type":"ContainerStarted","Data":"e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a"} Apr 04 02:36:46 crc kubenswrapper[4681]: I0404 02:36:46.701667 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" event={"ID":"79cff0ca-47f9-4198-abf2-a488089c2ade","Type":"ContainerStarted","Data":"497d09a39e76fc72dc8c42ffde19012c4e38b2aa185e9dc1439aa6bb7e091576"} Apr 04 02:36:46 crc kubenswrapper[4681]: I0404 02:36:46.720643 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" podStartSLOduration=2.085565511 podStartE2EDuration="2.720624671s" podCreationTimestamp="2026-04-04 
02:36:44 +0000 UTC" firstStartedPulling="2026-04-04 02:36:45.652520796 +0000 UTC m=+2485.318295936" lastFinishedPulling="2026-04-04 02:36:46.287579966 +0000 UTC m=+2485.953355096" observedRunningTime="2026-04-04 02:36:46.71951511 +0000 UTC m=+2486.385290230" watchObservedRunningTime="2026-04-04 02:36:46.720624671 +0000 UTC m=+2486.386399781" Apr 04 02:36:51 crc kubenswrapper[4681]: I0404 02:36:51.760448 4681 generic.go:334] "Generic (PLEG): container finished" podID="79cff0ca-47f9-4198-abf2-a488089c2ade" containerID="497d09a39e76fc72dc8c42ffde19012c4e38b2aa185e9dc1439aa6bb7e091576" exitCode=0 Apr 04 02:36:51 crc kubenswrapper[4681]: I0404 02:36:51.760529 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" event={"ID":"79cff0ca-47f9-4198-abf2-a488089c2ade","Type":"ContainerDied","Data":"497d09a39e76fc72dc8c42ffde19012c4e38b2aa185e9dc1439aa6bb7e091576"} Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.174014 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.291868 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw6dh\" (UniqueName: \"kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh\") pod \"79cff0ca-47f9-4198-abf2-a488089c2ade\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.291975 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory\") pod \"79cff0ca-47f9-4198-abf2-a488089c2ade\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.292037 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam\") pod \"79cff0ca-47f9-4198-abf2-a488089c2ade\" (UID: \"79cff0ca-47f9-4198-abf2-a488089c2ade\") " Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.301577 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh" (OuterVolumeSpecName: "kube-api-access-sw6dh") pod "79cff0ca-47f9-4198-abf2-a488089c2ade" (UID: "79cff0ca-47f9-4198-abf2-a488089c2ade"). InnerVolumeSpecName "kube-api-access-sw6dh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.321397 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "79cff0ca-47f9-4198-abf2-a488089c2ade" (UID: "79cff0ca-47f9-4198-abf2-a488089c2ade"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.321960 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory" (OuterVolumeSpecName: "inventory") pod "79cff0ca-47f9-4198-abf2-a488089c2ade" (UID: "79cff0ca-47f9-4198-abf2-a488089c2ade"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.394877 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw6dh\" (UniqueName: \"kubernetes.io/projected/79cff0ca-47f9-4198-abf2-a488089c2ade-kube-api-access-sw6dh\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.395060 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.395070 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/79cff0ca-47f9-4198-abf2-a488089c2ade-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.782503 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" 
event={"ID":"79cff0ca-47f9-4198-abf2-a488089c2ade","Type":"ContainerDied","Data":"e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a"} Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.782565 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0f09cbab83b19c63419174e9943da6d8b871c35fb51a213dc356b4fb28af66a" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.782575 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.892501 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv"] Apr 04 02:36:53 crc kubenswrapper[4681]: E0404 02:36:53.893064 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79cff0ca-47f9-4198-abf2-a488089c2ade" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.893094 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="79cff0ca-47f9-4198-abf2-a488089c2ade" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.893413 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="79cff0ca-47f9-4198-abf2-a488089c2ade" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.894423 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.907905 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.908214 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.908484 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.908638 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:36:53 crc kubenswrapper[4681]: I0404 02:36:53.924097 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv"] Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.005047 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.005187 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms4qn\" (UniqueName: \"kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.005328 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.107004 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.107109 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms4qn\" (UniqueName: \"kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.107246 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.111417 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.111767 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.127304 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms4qn\" (UniqueName: \"kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-cqsxv\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.223735 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.763653 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv"] Apr 04 02:36:54 crc kubenswrapper[4681]: I0404 02:36:54.793642 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" event={"ID":"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9","Type":"ContainerStarted","Data":"d9f8919fda56c37ce656125db867c3761cf760c1bac2678659ff612e931dd463"} Apr 04 02:36:55 crc kubenswrapper[4681]: I0404 02:36:55.804613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" event={"ID":"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9","Type":"ContainerStarted","Data":"81171b2ac42ac9acc89c432e958ef05e17ebdd601fce78df0096429164092bf2"} Apr 04 02:36:55 crc kubenswrapper[4681]: I0404 02:36:55.834224 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" podStartSLOduration=2.439201732 podStartE2EDuration="2.834205307s" podCreationTimestamp="2026-04-04 02:36:53 +0000 UTC" firstStartedPulling="2026-04-04 02:36:54.765099925 +0000 UTC m=+2494.430875055" lastFinishedPulling="2026-04-04 02:36:55.16010351 +0000 UTC m=+2494.825878630" observedRunningTime="2026-04-04 02:36:55.823316019 +0000 UTC m=+2495.489091139" watchObservedRunningTime="2026-04-04 02:36:55.834205307 +0000 UTC m=+2495.499980427" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.524172 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:36:56 crc 
kubenswrapper[4681]: I0404 02:36:56.524245 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.524320 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.525179 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.525308 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" gracePeriod=600 Apr 04 02:36:56 crc kubenswrapper[4681]: E0404 02:36:56.646178 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.818550 4681 generic.go:334] "Generic 
(PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" exitCode=0 Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.819355 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee"} Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.819393 4681 scope.go:117] "RemoveContainer" containerID="cd044358ac75974487a44a4e933ddc9b9a48d95be8a57a85af2e38de9daa1d56" Apr 04 02:36:56 crc kubenswrapper[4681]: I0404 02:36:56.819742 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:36:56 crc kubenswrapper[4681]: E0404 02:36:56.819947 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.048047 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dn8zm"] Apr 04 02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.056689 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dn8zm"] Apr 04 02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.068174 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-zz49b"] Apr 04 02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.076811 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-zz49b"] Apr 04 
02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.211473 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9e030b-26b4-4add-95b6-aaf9b50907db" path="/var/lib/kubelet/pods/5c9e030b-26b4-4add-95b6-aaf9b50907db/volumes" Apr 04 02:37:03 crc kubenswrapper[4681]: I0404 02:37:03.212316 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="624921f8-2de2-4354-9a6b-c5cb0c9e9a21" path="/var/lib/kubelet/pods/624921f8-2de2-4354-9a6b-c5cb0c9e9a21/volumes" Apr 04 02:37:04 crc kubenswrapper[4681]: I0404 02:37:04.030514 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-qlsx4"] Apr 04 02:37:04 crc kubenswrapper[4681]: I0404 02:37:04.040352 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-dc13-account-create-update-bm9fk"] Apr 04 02:37:04 crc kubenswrapper[4681]: I0404 02:37:04.049334 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-qlsx4"] Apr 04 02:37:04 crc kubenswrapper[4681]: I0404 02:37:04.060286 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-dc13-account-create-update-bm9fk"] Apr 04 02:37:05 crc kubenswrapper[4681]: I0404 02:37:05.211747 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f70a3b1-7886-4a8d-988e-8bf233a96729" path="/var/lib/kubelet/pods/0f70a3b1-7886-4a8d-988e-8bf233a96729/volumes" Apr 04 02:37:05 crc kubenswrapper[4681]: I0404 02:37:05.212369 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23481861-506b-4a5d-a1da-d6a21811d7c5" path="/var/lib/kubelet/pods/23481861-506b-4a5d-a1da-d6a21811d7c5/volumes" Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.029184 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-4867-account-create-update-gj56s"] Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.039618 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-2f8c-account-create-update-96qv2"] Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.047566 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-2f8c-account-create-update-96qv2"] Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.071756 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-4867-account-create-update-gj56s"] Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.211600 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43326c52-09d6-47c0-a336-cd16e11dd6a0" path="/var/lib/kubelet/pods/43326c52-09d6-47c0-a336-cd16e11dd6a0/volumes" Apr 04 02:37:09 crc kubenswrapper[4681]: I0404 02:37:09.212570 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bda19d-5103-4038-8296-583fb1d25024" path="/var/lib/kubelet/pods/61bda19d-5103-4038-8296-583fb1d25024/volumes" Apr 04 02:37:12 crc kubenswrapper[4681]: I0404 02:37:12.201472 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:37:12 crc kubenswrapper[4681]: E0404 02:37:12.201985 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:37:25 crc kubenswrapper[4681]: I0404 02:37:25.201239 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:37:25 crc kubenswrapper[4681]: E0404 02:37:25.202114 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.197837 4681 scope.go:117] "RemoveContainer" containerID="69024f7ab31c18cfb5865f3d5ec39c5c664924e6314921baf872776089090cf9" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.231854 4681 scope.go:117] "RemoveContainer" containerID="e1e3f8c4c2eaa5fa6f2817639852c3f7064d58c649cb967fc4717e06ce9b7fd8" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.287317 4681 scope.go:117] "RemoveContainer" containerID="79c496998594a50611ff5d009d57137b1e9fded3c052d1bee84662db1ca68783" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.337624 4681 scope.go:117] "RemoveContainer" containerID="311673f6c79575cc287edbc697b98f4db2d232b9478454010740f1f10de0a239" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.377775 4681 scope.go:117] "RemoveContainer" containerID="b674a83f619baadebecdb85e2d2cf9e2b2914920f0d44fb1a5207e4f0c022473" Apr 04 02:37:29 crc kubenswrapper[4681]: I0404 02:37:29.428320 4681 scope.go:117] "RemoveContainer" containerID="f9dbf785a3d3b41f29e3dc2fc882897b370437dc2d62b37f2d236a7f8dbad8b4" Apr 04 02:37:32 crc kubenswrapper[4681]: I0404 02:37:32.159524 4681 generic.go:334] "Generic (PLEG): container finished" podID="c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" containerID="81171b2ac42ac9acc89c432e958ef05e17ebdd601fce78df0096429164092bf2" exitCode=0 Apr 04 02:37:32 crc kubenswrapper[4681]: I0404 02:37:32.159601 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" event={"ID":"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9","Type":"ContainerDied","Data":"81171b2ac42ac9acc89c432e958ef05e17ebdd601fce78df0096429164092bf2"} Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.579472 
4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.724387 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms4qn\" (UniqueName: \"kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn\") pod \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.724667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam\") pod \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.724697 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory\") pod \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\" (UID: \"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9\") " Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.738796 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn" (OuterVolumeSpecName: "kube-api-access-ms4qn") pod "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" (UID: "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9"). InnerVolumeSpecName "kube-api-access-ms4qn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.753484 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory" (OuterVolumeSpecName: "inventory") pod "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" (UID: "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.765700 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" (UID: "c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.827108 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms4qn\" (UniqueName: \"kubernetes.io/projected/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-kube-api-access-ms4qn\") on node \"crc\" DevicePath \"\"" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.827141 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:37:33 crc kubenswrapper[4681]: I0404 02:37:33.827156 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.179648 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" 
event={"ID":"c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9","Type":"ContainerDied","Data":"d9f8919fda56c37ce656125db867c3761cf760c1bac2678659ff612e931dd463"} Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.179919 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9f8919fda56c37ce656125db867c3761cf760c1bac2678659ff612e931dd463" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.179683 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-cqsxv" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.254104 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966"] Apr 04 02:37:34 crc kubenswrapper[4681]: E0404 02:37:34.254694 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.254722 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.254948 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.255812 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.257633 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.258229 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.258327 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.258835 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.266750 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966"] Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.339004 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9f8m\" (UniqueName: \"kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.339340 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc 
kubenswrapper[4681]: I0404 02:37:34.339709 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.442166 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9f8m\" (UniqueName: \"kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.442302 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.442378 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.446726 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.446898 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.465646 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9f8m\" (UniqueName: \"kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-6b966\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:34 crc kubenswrapper[4681]: I0404 02:37:34.573777 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:37:35 crc kubenswrapper[4681]: I0404 02:37:35.108706 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966"] Apr 04 02:37:35 crc kubenswrapper[4681]: W0404 02:37:35.117716 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d18b62e_86ae_4c2b_864c_315581ca4f1a.slice/crio-1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa WatchSource:0}: Error finding container 1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa: Status 404 returned error can't find the container with id 1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa Apr 04 02:37:35 crc kubenswrapper[4681]: I0404 02:37:35.191693 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" event={"ID":"6d18b62e-86ae-4c2b-864c-315581ca4f1a","Type":"ContainerStarted","Data":"1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa"} Apr 04 02:37:36 crc kubenswrapper[4681]: I0404 02:37:36.208547 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" event={"ID":"6d18b62e-86ae-4c2b-864c-315581ca4f1a","Type":"ContainerStarted","Data":"18a63c62fb2eb1cdef2a374086d8bc806d0c864a28f832a920c553b6a26a10b7"} Apr 04 02:37:36 crc kubenswrapper[4681]: I0404 02:37:36.228451 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" podStartSLOduration=1.845580981 podStartE2EDuration="2.228435805s" podCreationTimestamp="2026-04-04 02:37:34 +0000 UTC" firstStartedPulling="2026-04-04 02:37:35.120417621 +0000 UTC m=+2534.786192741" lastFinishedPulling="2026-04-04 02:37:35.503272435 +0000 UTC m=+2535.169047565" 
observedRunningTime="2026-04-04 02:37:36.227333495 +0000 UTC m=+2535.893108615" watchObservedRunningTime="2026-04-04 02:37:36.228435805 +0000 UTC m=+2535.894210925" Apr 04 02:37:38 crc kubenswrapper[4681]: I0404 02:37:38.201425 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:37:38 crc kubenswrapper[4681]: E0404 02:37:38.202400 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:37:53 crc kubenswrapper[4681]: I0404 02:37:53.201505 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:37:53 crc kubenswrapper[4681]: E0404 02:37:53.202349 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:37:58 crc kubenswrapper[4681]: I0404 02:37:58.037719 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgfkz"] Apr 04 02:37:58 crc kubenswrapper[4681]: I0404 02:37:58.049901 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-dgfkz"] Apr 04 02:37:59 crc kubenswrapper[4681]: I0404 02:37:59.243092 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="eb3d2475-e275-4139-8d39-3b0518fa8e02" path="/var/lib/kubelet/pods/eb3d2475-e275-4139-8d39-3b0518fa8e02/volumes" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.160733 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587838-rgp5g"] Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.162757 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.164961 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.164963 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.165674 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.197798 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587838-rgp5g"] Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.268003 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qncpr\" (UniqueName: \"kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr\") pod \"auto-csr-approver-29587838-rgp5g\" (UID: \"c52306af-2657-4a6c-b9c3-b902bfc18de5\") " pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.370508 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qncpr\" (UniqueName: \"kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr\") pod \"auto-csr-approver-29587838-rgp5g\" (UID: \"c52306af-2657-4a6c-b9c3-b902bfc18de5\") " 
pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.397989 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qncpr\" (UniqueName: \"kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr\") pod \"auto-csr-approver-29587838-rgp5g\" (UID: \"c52306af-2657-4a6c-b9c3-b902bfc18de5\") " pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.500608 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:00 crc kubenswrapper[4681]: I0404 02:38:00.946394 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587838-rgp5g"] Apr 04 02:38:01 crc kubenswrapper[4681]: I0404 02:38:01.456327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" event={"ID":"c52306af-2657-4a6c-b9c3-b902bfc18de5","Type":"ContainerStarted","Data":"f53b4cda207dbe6d94d201ff7cc26669db29308bdaf01f212899762fa43646bb"} Apr 04 02:38:02 crc kubenswrapper[4681]: I0404 02:38:02.467520 4681 generic.go:334] "Generic (PLEG): container finished" podID="c52306af-2657-4a6c-b9c3-b902bfc18de5" containerID="3ef0632f9358875838dc8e66eae889b5b88ce9fb0a6a7ae2cdfdc5a936583931" exitCode=0 Apr 04 02:38:02 crc kubenswrapper[4681]: I0404 02:38:02.467987 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" event={"ID":"c52306af-2657-4a6c-b9c3-b902bfc18de5","Type":"ContainerDied","Data":"3ef0632f9358875838dc8e66eae889b5b88ce9fb0a6a7ae2cdfdc5a936583931"} Apr 04 02:38:03 crc kubenswrapper[4681]: I0404 02:38:03.840865 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:03 crc kubenswrapper[4681]: I0404 02:38:03.943452 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qncpr\" (UniqueName: \"kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr\") pod \"c52306af-2657-4a6c-b9c3-b902bfc18de5\" (UID: \"c52306af-2657-4a6c-b9c3-b902bfc18de5\") " Apr 04 02:38:03 crc kubenswrapper[4681]: I0404 02:38:03.951550 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr" (OuterVolumeSpecName: "kube-api-access-qncpr") pod "c52306af-2657-4a6c-b9c3-b902bfc18de5" (UID: "c52306af-2657-4a6c-b9c3-b902bfc18de5"). InnerVolumeSpecName "kube-api-access-qncpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.046994 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qncpr\" (UniqueName: \"kubernetes.io/projected/c52306af-2657-4a6c-b9c3-b902bfc18de5-kube-api-access-qncpr\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.484685 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" event={"ID":"c52306af-2657-4a6c-b9c3-b902bfc18de5","Type":"ContainerDied","Data":"f53b4cda207dbe6d94d201ff7cc26669db29308bdaf01f212899762fa43646bb"} Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.485008 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f53b4cda207dbe6d94d201ff7cc26669db29308bdaf01f212899762fa43646bb" Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.484736 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587838-rgp5g" Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.902779 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587832-grcbh"] Apr 04 02:38:04 crc kubenswrapper[4681]: I0404 02:38:04.914096 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587832-grcbh"] Apr 04 02:38:05 crc kubenswrapper[4681]: I0404 02:38:05.214596 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34cf84ca-142e-4466-b911-1886a9468d9e" path="/var/lib/kubelet/pods/34cf84ca-142e-4466-b911-1886a9468d9e/volumes" Apr 04 02:38:07 crc kubenswrapper[4681]: I0404 02:38:07.201144 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:38:07 crc kubenswrapper[4681]: E0404 02:38:07.202056 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:38:18 crc kubenswrapper[4681]: I0404 02:38:18.048009 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zbjjd"] Apr 04 02:38:18 crc kubenswrapper[4681]: I0404 02:38:18.062004 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zbjjd"] Apr 04 02:38:19 crc kubenswrapper[4681]: I0404 02:38:19.215075 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b124b58f-5f56-47cf-a141-28e8786a0673" path="/var/lib/kubelet/pods/b124b58f-5f56-47cf-a141-28e8786a0673/volumes" Apr 04 02:38:21 crc kubenswrapper[4681]: I0404 02:38:21.211648 4681 
scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:38:21 crc kubenswrapper[4681]: E0404 02:38:21.212178 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:38:23 crc kubenswrapper[4681]: I0404 02:38:23.680463 4681 generic.go:334] "Generic (PLEG): container finished" podID="6d18b62e-86ae-4c2b-864c-315581ca4f1a" containerID="18a63c62fb2eb1cdef2a374086d8bc806d0c864a28f832a920c553b6a26a10b7" exitCode=0 Apr 04 02:38:23 crc kubenswrapper[4681]: I0404 02:38:23.680529 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" event={"ID":"6d18b62e-86ae-4c2b-864c-315581ca4f1a","Type":"ContainerDied","Data":"18a63c62fb2eb1cdef2a374086d8bc806d0c864a28f832a920c553b6a26a10b7"} Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.178013 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.284629 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam\") pod \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.284833 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory\") pod \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.285008 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9f8m\" (UniqueName: \"kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m\") pod \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\" (UID: \"6d18b62e-86ae-4c2b-864c-315581ca4f1a\") " Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.290093 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m" (OuterVolumeSpecName: "kube-api-access-d9f8m") pod "6d18b62e-86ae-4c2b-864c-315581ca4f1a" (UID: "6d18b62e-86ae-4c2b-864c-315581ca4f1a"). InnerVolumeSpecName "kube-api-access-d9f8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.313532 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory" (OuterVolumeSpecName: "inventory") pod "6d18b62e-86ae-4c2b-864c-315581ca4f1a" (UID: "6d18b62e-86ae-4c2b-864c-315581ca4f1a"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.316782 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6d18b62e-86ae-4c2b-864c-315581ca4f1a" (UID: "6d18b62e-86ae-4c2b-864c-315581ca4f1a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.388386 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9f8m\" (UniqueName: \"kubernetes.io/projected/6d18b62e-86ae-4c2b-864c-315581ca4f1a-kube-api-access-d9f8m\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.388419 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.388430 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d18b62e-86ae-4c2b-864c-315581ca4f1a-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.701159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" event={"ID":"6d18b62e-86ae-4c2b-864c-315581ca4f1a","Type":"ContainerDied","Data":"1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa"} Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.701511 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cf0634598a739ee9edb8c0df685154aea9b63c87c1ae6040b14e7c4c432eaaa" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 
02:38:25.701284 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-6b966" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.808308 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bdcnq"] Apr 04 02:38:25 crc kubenswrapper[4681]: E0404 02:38:25.809186 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c52306af-2657-4a6c-b9c3-b902bfc18de5" containerName="oc" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.809216 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c52306af-2657-4a6c-b9c3-b902bfc18de5" containerName="oc" Apr 04 02:38:25 crc kubenswrapper[4681]: E0404 02:38:25.809292 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d18b62e-86ae-4c2b-864c-315581ca4f1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.809308 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d18b62e-86ae-4c2b-864c-315581ca4f1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.809640 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c52306af-2657-4a6c-b9c3-b902bfc18de5" containerName="oc" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.809696 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d18b62e-86ae-4c2b-864c-315581ca4f1a" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.810878 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.814394 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.814393 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.815032 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.815238 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.825460 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bdcnq"] Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.898195 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.898366 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzhnr\" (UniqueName: \"kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:25 crc kubenswrapper[4681]: I0404 02:38:25.898495 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.000169 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.000226 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzhnr\" (UniqueName: \"kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.000344 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.004462 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: 
I0404 02:38:26.006056 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.026671 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzhnr\" (UniqueName: \"kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr\") pod \"ssh-known-hosts-edpm-deployment-bdcnq\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.146679 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.676377 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-bdcnq"] Apr 04 02:38:26 crc kubenswrapper[4681]: I0404 02:38:26.717513 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" event={"ID":"00fa5e33-c452-4b88-bd67-bc0e6094d232","Type":"ContainerStarted","Data":"49dbae4aa5e19cbb532e4e09b908dcc7868713e64be5974d5c86ffc18f9a67da"} Apr 04 02:38:27 crc kubenswrapper[4681]: I0404 02:38:27.729695 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" event={"ID":"00fa5e33-c452-4b88-bd67-bc0e6094d232","Type":"ContainerStarted","Data":"d75ae9baa8a7c3e5159a6dd0a9b6cf3dc3a5af3e8ed126a3d07aa547fba7bb49"} Apr 04 02:38:27 crc kubenswrapper[4681]: I0404 02:38:27.749921 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" podStartSLOduration=2.322687436 
podStartE2EDuration="2.749899771s" podCreationTimestamp="2026-04-04 02:38:25 +0000 UTC" firstStartedPulling="2026-04-04 02:38:26.687214435 +0000 UTC m=+2586.352989555" lastFinishedPulling="2026-04-04 02:38:27.11442677 +0000 UTC m=+2586.780201890" observedRunningTime="2026-04-04 02:38:27.743544818 +0000 UTC m=+2587.409319938" watchObservedRunningTime="2026-04-04 02:38:27.749899771 +0000 UTC m=+2587.415674891" Apr 04 02:38:28 crc kubenswrapper[4681]: I0404 02:38:28.045055 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnf4c"] Apr 04 02:38:28 crc kubenswrapper[4681]: I0404 02:38:28.066025 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nnf4c"] Apr 04 02:38:29 crc kubenswrapper[4681]: I0404 02:38:29.210902 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2883f88-a3fd-46b2-8452-32974b0c6b4f" path="/var/lib/kubelet/pods/f2883f88-a3fd-46b2-8452-32974b0c6b4f/volumes" Apr 04 02:38:29 crc kubenswrapper[4681]: I0404 02:38:29.592412 4681 scope.go:117] "RemoveContainer" containerID="3728b19c83666aac895486a18b6e03ef521b83f65df191dcdd7eb17506ee7643" Apr 04 02:38:29 crc kubenswrapper[4681]: I0404 02:38:29.653853 4681 scope.go:117] "RemoveContainer" containerID="69baf56b8e9d8b8f84b6a5cd7d34cf35744bb98017478f06e79186c84a1cc68c" Apr 04 02:38:29 crc kubenswrapper[4681]: I0404 02:38:29.714909 4681 scope.go:117] "RemoveContainer" containerID="9d325544c1fadf7e3b3e6963472a95d649e7ef877e63d44a9ab4a29b2192cb02" Apr 04 02:38:29 crc kubenswrapper[4681]: I0404 02:38:29.792364 4681 scope.go:117] "RemoveContainer" containerID="47636a03276739c914fbe54c0af431ff55d270f3ca8f7e1ea8fee46b07d26d00" Apr 04 02:38:33 crc kubenswrapper[4681]: I0404 02:38:33.810946 4681 generic.go:334] "Generic (PLEG): container finished" podID="00fa5e33-c452-4b88-bd67-bc0e6094d232" containerID="d75ae9baa8a7c3e5159a6dd0a9b6cf3dc3a5af3e8ed126a3d07aa547fba7bb49" exitCode=0 Apr 04 02:38:33 crc 
kubenswrapper[4681]: I0404 02:38:33.811045 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" event={"ID":"00fa5e33-c452-4b88-bd67-bc0e6094d232","Type":"ContainerDied","Data":"d75ae9baa8a7c3e5159a6dd0a9b6cf3dc3a5af3e8ed126a3d07aa547fba7bb49"} Apr 04 02:38:34 crc kubenswrapper[4681]: I0404 02:38:34.200950 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:38:34 crc kubenswrapper[4681]: E0404 02:38:34.201234 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.283956 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.405619 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0\") pod \"00fa5e33-c452-4b88-bd67-bc0e6094d232\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.405714 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzhnr\" (UniqueName: \"kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr\") pod \"00fa5e33-c452-4b88-bd67-bc0e6094d232\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.405943 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam\") pod \"00fa5e33-c452-4b88-bd67-bc0e6094d232\" (UID: \"00fa5e33-c452-4b88-bd67-bc0e6094d232\") " Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.411493 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr" (OuterVolumeSpecName: "kube-api-access-hzhnr") pod "00fa5e33-c452-4b88-bd67-bc0e6094d232" (UID: "00fa5e33-c452-4b88-bd67-bc0e6094d232"). InnerVolumeSpecName "kube-api-access-hzhnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.435691 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "00fa5e33-c452-4b88-bd67-bc0e6094d232" (UID: "00fa5e33-c452-4b88-bd67-bc0e6094d232"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.439604 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "00fa5e33-c452-4b88-bd67-bc0e6094d232" (UID: "00fa5e33-c452-4b88-bd67-bc0e6094d232"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.508335 4681 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-inventory-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.508364 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzhnr\" (UniqueName: \"kubernetes.io/projected/00fa5e33-c452-4b88-bd67-bc0e6094d232-kube-api-access-hzhnr\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.508374 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/00fa5e33-c452-4b88-bd67-bc0e6094d232-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.831768 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" event={"ID":"00fa5e33-c452-4b88-bd67-bc0e6094d232","Type":"ContainerDied","Data":"49dbae4aa5e19cbb532e4e09b908dcc7868713e64be5974d5c86ffc18f9a67da"} Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.831813 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-bdcnq" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.831823 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49dbae4aa5e19cbb532e4e09b908dcc7868713e64be5974d5c86ffc18f9a67da" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.935795 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s"] Apr 04 02:38:35 crc kubenswrapper[4681]: E0404 02:38:35.936320 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00fa5e33-c452-4b88-bd67-bc0e6094d232" containerName="ssh-known-hosts-edpm-deployment" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.936344 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="00fa5e33-c452-4b88-bd67-bc0e6094d232" containerName="ssh-known-hosts-edpm-deployment" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.936655 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="00fa5e33-c452-4b88-bd67-bc0e6094d232" containerName="ssh-known-hosts-edpm-deployment" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.937654 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.940644 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.941111 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.941388 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.943373 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:38:35 crc kubenswrapper[4681]: I0404 02:38:35.959232 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s"] Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.016859 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z65xt\" (UniqueName: \"kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.017165 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.017395 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.119064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z65xt\" (UniqueName: \"kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.119185 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.119249 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.135064 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: 
\"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.135079 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.154203 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z65xt\" (UniqueName: \"kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-cwn7s\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.264880 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.789205 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s"] Apr 04 02:38:36 crc kubenswrapper[4681]: I0404 02:38:36.840690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" event={"ID":"17d6bc83-830a-47e3-b5c6-96ae2ecfad52","Type":"ContainerStarted","Data":"ddf3b6dc7b9776a578264d70cd0ded494854cab8e0c8beb56417d12e916a0f05"} Apr 04 02:38:37 crc kubenswrapper[4681]: I0404 02:38:37.852441 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" event={"ID":"17d6bc83-830a-47e3-b5c6-96ae2ecfad52","Type":"ContainerStarted","Data":"e483598877ff999285b63821910974ec76981be04c1ec85a2b7315b5a71060ac"} Apr 04 02:38:37 crc kubenswrapper[4681]: I0404 02:38:37.893270 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" podStartSLOduration=2.474781518 podStartE2EDuration="2.893242724s" podCreationTimestamp="2026-04-04 02:38:35 +0000 UTC" firstStartedPulling="2026-04-04 02:38:36.7870894 +0000 UTC m=+2596.452864520" lastFinishedPulling="2026-04-04 02:38:37.205550606 +0000 UTC m=+2596.871325726" observedRunningTime="2026-04-04 02:38:37.865404054 +0000 UTC m=+2597.531179194" watchObservedRunningTime="2026-04-04 02:38:37.893242724 +0000 UTC m=+2597.559017844" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.139824 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.151916 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.166597 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.283009 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.283188 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcw7p\" (UniqueName: \"kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.283251 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.385235 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.385437 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hcw7p\" (UniqueName: \"kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.385702 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.385837 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.386123 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.405004 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcw7p\" (UniqueName: \"kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p\") pod \"community-operators-qdd8v\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.475967 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:43 crc kubenswrapper[4681]: I0404 02:38:43.995833 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:44 crc kubenswrapper[4681]: I0404 02:38:44.935553 4681 generic.go:334] "Generic (PLEG): container finished" podID="17d6bc83-830a-47e3-b5c6-96ae2ecfad52" containerID="e483598877ff999285b63821910974ec76981be04c1ec85a2b7315b5a71060ac" exitCode=0 Apr 04 02:38:44 crc kubenswrapper[4681]: I0404 02:38:44.935604 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" event={"ID":"17d6bc83-830a-47e3-b5c6-96ae2ecfad52","Type":"ContainerDied","Data":"e483598877ff999285b63821910974ec76981be04c1ec85a2b7315b5a71060ac"} Apr 04 02:38:44 crc kubenswrapper[4681]: I0404 02:38:44.939232 4681 generic.go:334] "Generic (PLEG): container finished" podID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerID="0cffff52057722914ae771550d24a28e8660f0c2f9be7ed9675d080f8348d29a" exitCode=0 Apr 04 02:38:44 crc kubenswrapper[4681]: I0404 02:38:44.939290 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerDied","Data":"0cffff52057722914ae771550d24a28e8660f0c2f9be7ed9675d080f8348d29a"} Apr 04 02:38:44 crc kubenswrapper[4681]: I0404 02:38:44.939331 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerStarted","Data":"3ef35335fcdda1a65e60f76e44e6b625abd8f8884a08648cbc746e8765d7321b"} Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.202139 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:38:46 crc kubenswrapper[4681]: E0404 02:38:46.202741 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.420785 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.549981 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam\") pod \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.550201 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory\") pod \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.550253 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z65xt\" (UniqueName: \"kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt\") pod \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\" (UID: \"17d6bc83-830a-47e3-b5c6-96ae2ecfad52\") " Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.555294 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt" (OuterVolumeSpecName: "kube-api-access-z65xt") pod 
"17d6bc83-830a-47e3-b5c6-96ae2ecfad52" (UID: "17d6bc83-830a-47e3-b5c6-96ae2ecfad52"). InnerVolumeSpecName "kube-api-access-z65xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.581377 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory" (OuterVolumeSpecName: "inventory") pod "17d6bc83-830a-47e3-b5c6-96ae2ecfad52" (UID: "17d6bc83-830a-47e3-b5c6-96ae2ecfad52"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.591187 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "17d6bc83-830a-47e3-b5c6-96ae2ecfad52" (UID: "17d6bc83-830a-47e3-b5c6-96ae2ecfad52"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.654707 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.654774 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z65xt\" (UniqueName: \"kubernetes.io/projected/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-kube-api-access-z65xt\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.654796 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/17d6bc83-830a-47e3-b5c6-96ae2ecfad52-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.959487 4681 generic.go:334] "Generic (PLEG): container finished" podID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerID="0185838d0d32e815faf81a52dd27fbb01a2f9b7ca1851421acb4feb9a82ad2a1" exitCode=0 Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.959568 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerDied","Data":"0185838d0d32e815faf81a52dd27fbb01a2f9b7ca1851421acb4feb9a82ad2a1"} Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.961478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" event={"ID":"17d6bc83-830a-47e3-b5c6-96ae2ecfad52","Type":"ContainerDied","Data":"ddf3b6dc7b9776a578264d70cd0ded494854cab8e0c8beb56417d12e916a0f05"} Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.961522 4681 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ddf3b6dc7b9776a578264d70cd0ded494854cab8e0c8beb56417d12e916a0f05" Apr 04 02:38:46 crc kubenswrapper[4681]: I0404 02:38:46.961586 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-cwn7s" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.052206 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h"] Apr 04 02:38:47 crc kubenswrapper[4681]: E0404 02:38:47.052818 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d6bc83-830a-47e3-b5c6-96ae2ecfad52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.052844 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d6bc83-830a-47e3-b5c6-96ae2ecfad52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.053086 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d6bc83-830a-47e3-b5c6-96ae2ecfad52" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.054525 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.056292 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.056778 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.059768 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.059884 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.063520 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h"] Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.163713 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.163790 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jql4r\" (UniqueName: \"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 
02:38:47.163902 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.266693 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.266793 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jql4r\" (UniqueName: \"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.267551 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.272677 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.273085 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.287472 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jql4r\" (UniqueName: \"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.382114 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.918404 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h"] Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.972378 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerStarted","Data":"9a5ee7eb2cca20ac6feaf4f2e3158afe3e0acdd513fdb757a36aedaf172b492c"} Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.975068 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" event={"ID":"0b001b06-583d-4b8d-974e-e7cf078a514d","Type":"ContainerStarted","Data":"bef055e260feecacb6cb7a76d29c78d5ddc6f8b49d6fc0afd0f2ba029a14d2f7"} Apr 04 02:38:47 crc kubenswrapper[4681]: I0404 02:38:47.999472 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qdd8v" podStartSLOduration=2.632066331 podStartE2EDuration="4.999450122s" podCreationTimestamp="2026-04-04 02:38:43 +0000 UTC" firstStartedPulling="2026-04-04 02:38:44.942206175 +0000 UTC m=+2604.607981295" lastFinishedPulling="2026-04-04 02:38:47.309589956 +0000 UTC m=+2606.975365086" observedRunningTime="2026-04-04 02:38:47.991971279 +0000 UTC m=+2607.657746399" watchObservedRunningTime="2026-04-04 02:38:47.999450122 +0000 UTC m=+2607.665225242" Apr 04 02:38:48 crc kubenswrapper[4681]: I0404 02:38:48.988510 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" event={"ID":"0b001b06-583d-4b8d-974e-e7cf078a514d","Type":"ContainerStarted","Data":"c47f89a4aabb80c7b4a6612ab8c5c0e39e77d3fd9db549a0c88faedd4947a37c"} Apr 04 02:38:49 crc kubenswrapper[4681]: I0404 02:38:49.004488 4681 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" podStartSLOduration=1.6278570810000002 podStartE2EDuration="2.004473924s" podCreationTimestamp="2026-04-04 02:38:47 +0000 UTC" firstStartedPulling="2026-04-04 02:38:47.92246232 +0000 UTC m=+2607.588237440" lastFinishedPulling="2026-04-04 02:38:48.299079163 +0000 UTC m=+2607.964854283" observedRunningTime="2026-04-04 02:38:49.003844857 +0000 UTC m=+2608.669619977" watchObservedRunningTime="2026-04-04 02:38:49.004473924 +0000 UTC m=+2608.670249044" Apr 04 02:38:53 crc kubenswrapper[4681]: I0404 02:38:53.476825 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:53 crc kubenswrapper[4681]: I0404 02:38:53.477353 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:53 crc kubenswrapper[4681]: I0404 02:38:53.524344 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:54 crc kubenswrapper[4681]: I0404 02:38:54.089213 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:54 crc kubenswrapper[4681]: I0404 02:38:54.141847 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:56 crc kubenswrapper[4681]: I0404 02:38:56.055208 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qdd8v" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="registry-server" containerID="cri-o://9a5ee7eb2cca20ac6feaf4f2e3158afe3e0acdd513fdb757a36aedaf172b492c" gracePeriod=2 Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.068721 4681 generic.go:334] "Generic (PLEG): 
container finished" podID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerID="9a5ee7eb2cca20ac6feaf4f2e3158afe3e0acdd513fdb757a36aedaf172b492c" exitCode=0 Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.068809 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerDied","Data":"9a5ee7eb2cca20ac6feaf4f2e3158afe3e0acdd513fdb757a36aedaf172b492c"} Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.330732 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.394382 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities\") pod \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.394491 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content\") pod \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.395531 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities" (OuterVolumeSpecName: "utilities") pod "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" (UID: "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.397757 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcw7p\" (UniqueName: \"kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p\") pod \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\" (UID: \"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7\") " Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.402411 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.410587 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p" (OuterVolumeSpecName: "kube-api-access-hcw7p") pod "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" (UID: "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7"). InnerVolumeSpecName "kube-api-access-hcw7p". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.458749 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" (UID: "cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.504650 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcw7p\" (UniqueName: \"kubernetes.io/projected/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-kube-api-access-hcw7p\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:57 crc kubenswrapper[4681]: I0404 02:38:57.504686 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.082981 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qdd8v" event={"ID":"cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7","Type":"ContainerDied","Data":"3ef35335fcdda1a65e60f76e44e6b625abd8f8884a08648cbc746e8765d7321b"} Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.083004 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qdd8v" Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.083093 4681 scope.go:117] "RemoveContainer" containerID="9a5ee7eb2cca20ac6feaf4f2e3158afe3e0acdd513fdb757a36aedaf172b492c" Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.085599 4681 generic.go:334] "Generic (PLEG): container finished" podID="0b001b06-583d-4b8d-974e-e7cf078a514d" containerID="c47f89a4aabb80c7b4a6612ab8c5c0e39e77d3fd9db549a0c88faedd4947a37c" exitCode=0 Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.085665 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" event={"ID":"0b001b06-583d-4b8d-974e-e7cf078a514d","Type":"ContainerDied","Data":"c47f89a4aabb80c7b4a6612ab8c5c0e39e77d3fd9db549a0c88faedd4947a37c"} Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.104318 4681 scope.go:117] "RemoveContainer" containerID="0185838d0d32e815faf81a52dd27fbb01a2f9b7ca1851421acb4feb9a82ad2a1" Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.137836 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.147291 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qdd8v"] Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.147857 4681 scope.go:117] "RemoveContainer" containerID="0cffff52057722914ae771550d24a28e8660f0c2f9be7ed9675d080f8348d29a" Apr 04 02:38:58 crc kubenswrapper[4681]: I0404 02:38:58.201953 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:38:58 crc kubenswrapper[4681]: E0404 02:38:58.202513 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.212602 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" path="/var/lib/kubelet/pods/cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7/volumes" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.554711 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.652359 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jql4r\" (UniqueName: \"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r\") pod \"0b001b06-583d-4b8d-974e-e7cf078a514d\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.652505 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory\") pod \"0b001b06-583d-4b8d-974e-e7cf078a514d\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.652667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam\") pod \"0b001b06-583d-4b8d-974e-e7cf078a514d\" (UID: \"0b001b06-583d-4b8d-974e-e7cf078a514d\") " Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.659220 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r" (OuterVolumeSpecName: "kube-api-access-jql4r") pod "0b001b06-583d-4b8d-974e-e7cf078a514d" (UID: "0b001b06-583d-4b8d-974e-e7cf078a514d"). InnerVolumeSpecName "kube-api-access-jql4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.680213 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory" (OuterVolumeSpecName: "inventory") pod "0b001b06-583d-4b8d-974e-e7cf078a514d" (UID: "0b001b06-583d-4b8d-974e-e7cf078a514d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.690448 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0b001b06-583d-4b8d-974e-e7cf078a514d" (UID: "0b001b06-583d-4b8d-974e-e7cf078a514d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.754816 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jql4r\" (UniqueName: \"kubernetes.io/projected/0b001b06-583d-4b8d-974e-e7cf078a514d-kube-api-access-jql4r\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.754851 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:38:59 crc kubenswrapper[4681]: I0404 02:38:59.754865 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0b001b06-583d-4b8d-974e-e7cf078a514d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.112358 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" event={"ID":"0b001b06-583d-4b8d-974e-e7cf078a514d","Type":"ContainerDied","Data":"bef055e260feecacb6cb7a76d29c78d5ddc6f8b49d6fc0afd0f2ba029a14d2f7"} Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.112395 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bef055e260feecacb6cb7a76d29c78d5ddc6f8b49d6fc0afd0f2ba029a14d2f7" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.112448 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.216662 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq"] Apr 04 02:39:00 crc kubenswrapper[4681]: E0404 02:39:00.217166 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b001b06-583d-4b8d-974e-e7cf078a514d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217186 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b001b06-583d-4b8d-974e-e7cf078a514d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:00 crc kubenswrapper[4681]: E0404 02:39:00.217222 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="extract-utilities" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217231 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="extract-utilities" Apr 04 02:39:00 crc kubenswrapper[4681]: E0404 02:39:00.217241 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="registry-server" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217249 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="registry-server" Apr 04 02:39:00 crc kubenswrapper[4681]: E0404 02:39:00.217407 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="extract-content" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217419 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="extract-content" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217670 
4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b001b06-583d-4b8d-974e-e7cf078a514d" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.217689 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd591f84-5ba5-43d2-93bc-6a7f43d8b0c7" containerName="registry-server" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.218570 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.226548 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.226902 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.227060 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.227562 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.227680 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.227789 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.227910 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.228016 4681 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.232637 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq"] Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379494 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379548 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379646 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379674 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379771 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379818 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379864 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379896 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379959 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.379988 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.380033 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpntv\" (UniqueName: 
\"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.380057 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.380085 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482111 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482175 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482218 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482293 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482329 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482383 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpntv\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482443 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482543 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482574 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: 
\"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482614 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482671 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482706 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.482767 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.488073 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.488674 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.488970 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.489435 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.489662 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.489714 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.489793 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.490310 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.490344 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.491238 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.491493 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.493423 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.500815 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.504846 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpntv\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:00 crc kubenswrapper[4681]: I0404 02:39:00.579004 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:01 crc kubenswrapper[4681]: I0404 02:39:01.145249 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq"] Apr 04 02:39:02 crc kubenswrapper[4681]: I0404 02:39:02.131907 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" event={"ID":"76d7d624-1948-4ecc-ae72-3e40c03ec267","Type":"ContainerStarted","Data":"d8bbb9237be3bd2ed6b7ec549fefb4af4d45042cb2ed11e729e80ce687d07434"} Apr 04 02:39:02 crc kubenswrapper[4681]: I0404 02:39:02.131971 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" event={"ID":"76d7d624-1948-4ecc-ae72-3e40c03ec267","Type":"ContainerStarted","Data":"43ba490a022dbd97c7eeed1004a5038aedb32c70df60d177e762984af43f87e3"} Apr 04 02:39:02 crc kubenswrapper[4681]: I0404 02:39:02.160890 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" podStartSLOduration=1.7258174880000001 podStartE2EDuration="2.160873277s" podCreationTimestamp="2026-04-04 02:39:00 +0000 UTC" firstStartedPulling="2026-04-04 02:39:01.147349784 +0000 UTC m=+2620.813124904" lastFinishedPulling="2026-04-04 02:39:01.582405573 +0000 UTC m=+2621.248180693" observedRunningTime="2026-04-04 02:39:02.15399406 +0000 UTC m=+2621.819769180" watchObservedRunningTime="2026-04-04 02:39:02.160873277 +0000 UTC m=+2621.826648397" Apr 04 02:39:04 crc kubenswrapper[4681]: I0404 02:39:04.051967 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-n7nn9"] Apr 04 02:39:04 crc kubenswrapper[4681]: I0404 02:39:04.059954 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-n7nn9"] Apr 04 02:39:05 crc kubenswrapper[4681]: I0404 02:39:05.210829 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2487907-8ce5-4e62-a772-66ac674c64e0" path="/var/lib/kubelet/pods/f2487907-8ce5-4e62-a772-66ac674c64e0/volumes" Apr 04 02:39:10 crc kubenswrapper[4681]: I0404 02:39:10.201376 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:39:10 crc kubenswrapper[4681]: E0404 02:39:10.202404 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:39:21 crc kubenswrapper[4681]: I0404 02:39:21.207617 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:39:21 crc kubenswrapper[4681]: 
E0404 02:39:21.210444 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:39:29 crc kubenswrapper[4681]: I0404 02:39:29.929456 4681 scope.go:117] "RemoveContainer" containerID="effb887815aa24bd5625259c978ee550f13d88ff5100d9350b737ed32a1eae8d" Apr 04 02:39:35 crc kubenswrapper[4681]: I0404 02:39:35.201730 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:39:35 crc kubenswrapper[4681]: E0404 02:39:35.202827 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:39:37 crc kubenswrapper[4681]: I0404 02:39:37.521105 4681 generic.go:334] "Generic (PLEG): container finished" podID="76d7d624-1948-4ecc-ae72-3e40c03ec267" containerID="d8bbb9237be3bd2ed6b7ec549fefb4af4d45042cb2ed11e729e80ce687d07434" exitCode=0 Apr 04 02:39:37 crc kubenswrapper[4681]: I0404 02:39:37.522439 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" event={"ID":"76d7d624-1948-4ecc-ae72-3e40c03ec267","Type":"ContainerDied","Data":"d8bbb9237be3bd2ed6b7ec549fefb4af4d45042cb2ed11e729e80ce687d07434"} Apr 04 02:39:38 crc kubenswrapper[4681]: I0404 02:39:38.947125 4681 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.003703 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.003802 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.003863 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.004833 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.004891 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: 
\"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.004948 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.004984 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.010828 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.010839 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.010922 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.010939 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.011815 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.012918 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.034355 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106101 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106170 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpntv\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106190 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106215 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle\") pod 
\"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106752 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106789 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.106925 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle\") pod \"76d7d624-1948-4ecc-ae72-3e40c03ec267\" (UID: \"76d7d624-1948-4ecc-ae72-3e40c03ec267\") " Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109540 4681 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109774 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109789 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109875 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109805 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109921 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109936 4681 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.109952 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.110448 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.110514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.110907 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.113066 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.117242 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv" (OuterVolumeSpecName: "kube-api-access-wpntv") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "kube-api-access-wpntv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.135362 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory" (OuterVolumeSpecName: "inventory") pod "76d7d624-1948-4ecc-ae72-3e40c03ec267" (UID: "76d7d624-1948-4ecc-ae72-3e40c03ec267"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211917 4681 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211945 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211957 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpntv\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-kube-api-access-wpntv\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211966 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211978 4681 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211989 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/76d7d624-1948-4ecc-ae72-3e40c03ec267-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.211999 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76d7d624-1948-4ecc-ae72-3e40c03ec267-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.544304 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" event={"ID":"76d7d624-1948-4ecc-ae72-3e40c03ec267","Type":"ContainerDied","Data":"43ba490a022dbd97c7eeed1004a5038aedb32c70df60d177e762984af43f87e3"} Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.544341 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.544351 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43ba490a022dbd97c7eeed1004a5038aedb32c70df60d177e762984af43f87e3" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.746745 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5"] Apr 04 02:39:39 crc kubenswrapper[4681]: E0404 02:39:39.747190 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d7d624-1948-4ecc-ae72-3e40c03ec267" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.747209 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d7d624-1948-4ecc-ae72-3e40c03ec267" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.747413 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d7d624-1948-4ecc-ae72-3e40c03ec267" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.748080 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.760411 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.760476 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.760536 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.760576 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.761661 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.762456 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5"] Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.823491 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.823537 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dw47\" (UniqueName: \"kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: 
\"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.823591 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.823696 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.823775 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.925265 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.925354 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8dw47\" (UniqueName: \"kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.925405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.925445 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.925510 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.926345 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: 
\"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.930546 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.931440 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.936268 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:39 crc kubenswrapper[4681]: I0404 02:39:39.942774 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dw47\" (UniqueName: \"kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-fh7b5\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:40 crc kubenswrapper[4681]: I0404 02:39:40.064045 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:39:40 crc kubenswrapper[4681]: I0404 02:39:40.621315 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5"] Apr 04 02:39:41 crc kubenswrapper[4681]: I0404 02:39:41.568072 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" event={"ID":"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad","Type":"ContainerStarted","Data":"36e909cba0cea7a83f34bed2844eb23d97bfe3a0ac44ef236787b0aa218acfa3"} Apr 04 02:39:42 crc kubenswrapper[4681]: I0404 02:39:42.581034 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" event={"ID":"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad","Type":"ContainerStarted","Data":"4bec6321309ad6a05b11d2cf0e8780cfdcc7d127826c998a56d5fd2d43e42e86"} Apr 04 02:39:42 crc kubenswrapper[4681]: I0404 02:39:42.607095 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" podStartSLOduration=2.943620731 podStartE2EDuration="3.607072226s" podCreationTimestamp="2026-04-04 02:39:39 +0000 UTC" firstStartedPulling="2026-04-04 02:39:40.623879675 +0000 UTC m=+2660.289654795" lastFinishedPulling="2026-04-04 02:39:41.28733117 +0000 UTC m=+2660.953106290" observedRunningTime="2026-04-04 02:39:42.600742613 +0000 UTC m=+2662.266517733" watchObservedRunningTime="2026-04-04 02:39:42.607072226 +0000 UTC m=+2662.272847346" Apr 04 02:39:50 crc kubenswrapper[4681]: I0404 02:39:50.201196 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:39:50 crc kubenswrapper[4681]: E0404 02:39:50.202360 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.155604 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587840-f7mjw"] Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.158347 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.160692 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.160819 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.161109 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.169338 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587840-f7mjw"] Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.358339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rx7\" (UniqueName: \"kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7\") pod \"auto-csr-approver-29587840-f7mjw\" (UID: \"4995802e-736b-427c-8252-e2da74db085c\") " pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.460721 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79rx7\" (UniqueName: 
\"kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7\") pod \"auto-csr-approver-29587840-f7mjw\" (UID: \"4995802e-736b-427c-8252-e2da74db085c\") " pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.498955 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79rx7\" (UniqueName: \"kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7\") pod \"auto-csr-approver-29587840-f7mjw\" (UID: \"4995802e-736b-427c-8252-e2da74db085c\") " pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:00 crc kubenswrapper[4681]: I0404 02:40:00.787670 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:01 crc kubenswrapper[4681]: I0404 02:40:01.258205 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587840-f7mjw"] Apr 04 02:40:01 crc kubenswrapper[4681]: W0404 02:40:01.262624 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4995802e_736b_427c_8252_e2da74db085c.slice/crio-84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15 WatchSource:0}: Error finding container 84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15: Status 404 returned error can't find the container with id 84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15 Apr 04 02:40:01 crc kubenswrapper[4681]: I0404 02:40:01.767381 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" event={"ID":"4995802e-736b-427c-8252-e2da74db085c","Type":"ContainerStarted","Data":"84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15"} Apr 04 02:40:02 crc kubenswrapper[4681]: I0404 02:40:02.776935 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29587840-f7mjw" event={"ID":"4995802e-736b-427c-8252-e2da74db085c","Type":"ContainerStarted","Data":"fe521e60da101fef15c34d1b9f52d6395f5cbc797304d0de06be9e715465581b"} Apr 04 02:40:02 crc kubenswrapper[4681]: I0404 02:40:02.794986 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" podStartSLOduration=1.782378804 podStartE2EDuration="2.794967804s" podCreationTimestamp="2026-04-04 02:40:00 +0000 UTC" firstStartedPulling="2026-04-04 02:40:01.26438137 +0000 UTC m=+2680.930156490" lastFinishedPulling="2026-04-04 02:40:02.27697037 +0000 UTC m=+2681.942745490" observedRunningTime="2026-04-04 02:40:02.788190808 +0000 UTC m=+2682.453965928" watchObservedRunningTime="2026-04-04 02:40:02.794967804 +0000 UTC m=+2682.460742924" Apr 04 02:40:03 crc kubenswrapper[4681]: I0404 02:40:03.202823 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:40:03 crc kubenswrapper[4681]: E0404 02:40:03.203428 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:40:03 crc kubenswrapper[4681]: I0404 02:40:03.787422 4681 generic.go:334] "Generic (PLEG): container finished" podID="4995802e-736b-427c-8252-e2da74db085c" containerID="fe521e60da101fef15c34d1b9f52d6395f5cbc797304d0de06be9e715465581b" exitCode=0 Apr 04 02:40:03 crc kubenswrapper[4681]: I0404 02:40:03.787478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" 
event={"ID":"4995802e-736b-427c-8252-e2da74db085c","Type":"ContainerDied","Data":"fe521e60da101fef15c34d1b9f52d6395f5cbc797304d0de06be9e715465581b"} Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.133234 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.259293 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rx7\" (UniqueName: \"kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7\") pod \"4995802e-736b-427c-8252-e2da74db085c\" (UID: \"4995802e-736b-427c-8252-e2da74db085c\") " Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.264920 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7" (OuterVolumeSpecName: "kube-api-access-79rx7") pod "4995802e-736b-427c-8252-e2da74db085c" (UID: "4995802e-736b-427c-8252-e2da74db085c"). InnerVolumeSpecName "kube-api-access-79rx7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.366802 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79rx7\" (UniqueName: \"kubernetes.io/projected/4995802e-736b-427c-8252-e2da74db085c-kube-api-access-79rx7\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.805280 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" event={"ID":"4995802e-736b-427c-8252-e2da74db085c","Type":"ContainerDied","Data":"84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15"} Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.805653 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84fb64b265999dbf79305771171a0f8555c695349b24ecd5434839c6b33bca15" Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.805319 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587840-f7mjw" Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.871482 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587834-t97x5"] Apr 04 02:40:05 crc kubenswrapper[4681]: I0404 02:40:05.884729 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587834-t97x5"] Apr 04 02:40:07 crc kubenswrapper[4681]: I0404 02:40:07.214376 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad3faf5c-9f9c-4abb-9fec-ecf6960105ab" path="/var/lib/kubelet/pods/ad3faf5c-9f9c-4abb-9fec-ecf6960105ab/volumes" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.071680 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:12 crc kubenswrapper[4681]: E0404 02:40:12.072748 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4995802e-736b-427c-8252-e2da74db085c" containerName="oc" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.072848 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4995802e-736b-427c-8252-e2da74db085c" containerName="oc" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.073120 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4995802e-736b-427c-8252-e2da74db085c" containerName="oc" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.074811 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.094713 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.103585 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.105042 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.105140 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nw4rz\" (UniqueName: \"kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " 
pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.207418 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.207485 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nw4rz\" (UniqueName: \"kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.207597 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.208003 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.208113 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" 
Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.228544 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nw4rz\" (UniqueName: \"kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz\") pod \"redhat-marketplace-pkd9f\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.420793 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.869559 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:12 crc kubenswrapper[4681]: I0404 02:40:12.891859 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerStarted","Data":"9818be75f201e84265fce4312dae44a7ca8f971d1a905c15ec0f87aeb1a2d4b1"} Apr 04 02:40:13 crc kubenswrapper[4681]: I0404 02:40:13.909692 4681 generic.go:334] "Generic (PLEG): container finished" podID="30b278b0-28a3-4bbb-beac-abae03d60094" containerID="97102a0df58008757cb25ee9f6f52e0a505242a44bd7797452d2efe432cde408" exitCode=0 Apr 04 02:40:13 crc kubenswrapper[4681]: I0404 02:40:13.909843 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerDied","Data":"97102a0df58008757cb25ee9f6f52e0a505242a44bd7797452d2efe432cde408"} Apr 04 02:40:14 crc kubenswrapper[4681]: I0404 02:40:14.923172 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerStarted","Data":"99c10a1bef4abf602886a1c64d460cc2e4b9e1847cc917d817f9f37a7168f427"} Apr 04 
02:40:15 crc kubenswrapper[4681]: I0404 02:40:15.936328 4681 generic.go:334] "Generic (PLEG): container finished" podID="30b278b0-28a3-4bbb-beac-abae03d60094" containerID="99c10a1bef4abf602886a1c64d460cc2e4b9e1847cc917d817f9f37a7168f427" exitCode=0 Apr 04 02:40:15 crc kubenswrapper[4681]: I0404 02:40:15.936385 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerDied","Data":"99c10a1bef4abf602886a1c64d460cc2e4b9e1847cc917d817f9f37a7168f427"} Apr 04 02:40:15 crc kubenswrapper[4681]: I0404 02:40:15.936689 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerStarted","Data":"534ee420f8b830636249523df34972a67284bb1a84cab3b8ef7cfd1ae4dd3aa9"} Apr 04 02:40:15 crc kubenswrapper[4681]: I0404 02:40:15.969391 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pkd9f" podStartSLOduration=2.570263706 podStartE2EDuration="3.969370759s" podCreationTimestamp="2026-04-04 02:40:12 +0000 UTC" firstStartedPulling="2026-04-04 02:40:13.917307397 +0000 UTC m=+2693.583082537" lastFinishedPulling="2026-04-04 02:40:15.31641445 +0000 UTC m=+2694.982189590" observedRunningTime="2026-04-04 02:40:15.96171973 +0000 UTC m=+2695.627494850" watchObservedRunningTime="2026-04-04 02:40:15.969370759 +0000 UTC m=+2695.635145879" Apr 04 02:40:16 crc kubenswrapper[4681]: I0404 02:40:16.200962 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:40:16 crc kubenswrapper[4681]: E0404 02:40:16.201365 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:40:22 crc kubenswrapper[4681]: I0404 02:40:22.421349 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:22 crc kubenswrapper[4681]: I0404 02:40:22.422163 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:22 crc kubenswrapper[4681]: I0404 02:40:22.470363 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:23 crc kubenswrapper[4681]: I0404 02:40:23.051709 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:23 crc kubenswrapper[4681]: I0404 02:40:23.126920 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:25 crc kubenswrapper[4681]: I0404 02:40:25.062303 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pkd9f" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="registry-server" containerID="cri-o://534ee420f8b830636249523df34972a67284bb1a84cab3b8ef7cfd1ae4dd3aa9" gracePeriod=2 Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.075286 4681 generic.go:334] "Generic (PLEG): container finished" podID="30b278b0-28a3-4bbb-beac-abae03d60094" containerID="534ee420f8b830636249523df34972a67284bb1a84cab3b8ef7cfd1ae4dd3aa9" exitCode=0 Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.075434 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" 
event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerDied","Data":"534ee420f8b830636249523df34972a67284bb1a84cab3b8ef7cfd1ae4dd3aa9"} Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.075779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pkd9f" event={"ID":"30b278b0-28a3-4bbb-beac-abae03d60094","Type":"ContainerDied","Data":"9818be75f201e84265fce4312dae44a7ca8f971d1a905c15ec0f87aeb1a2d4b1"} Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.075799 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9818be75f201e84265fce4312dae44a7ca8f971d1a905c15ec0f87aeb1a2d4b1" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.106214 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.188870 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content\") pod \"30b278b0-28a3-4bbb-beac-abae03d60094\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.188955 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities\") pod \"30b278b0-28a3-4bbb-beac-abae03d60094\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.189198 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nw4rz\" (UniqueName: \"kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz\") pod \"30b278b0-28a3-4bbb-beac-abae03d60094\" (UID: \"30b278b0-28a3-4bbb-beac-abae03d60094\") " Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 
02:40:26.190200 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities" (OuterVolumeSpecName: "utilities") pod "30b278b0-28a3-4bbb-beac-abae03d60094" (UID: "30b278b0-28a3-4bbb-beac-abae03d60094"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.199584 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz" (OuterVolumeSpecName: "kube-api-access-nw4rz") pod "30b278b0-28a3-4bbb-beac-abae03d60094" (UID: "30b278b0-28a3-4bbb-beac-abae03d60094"). InnerVolumeSpecName "kube-api-access-nw4rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.221507 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30b278b0-28a3-4bbb-beac-abae03d60094" (UID: "30b278b0-28a3-4bbb-beac-abae03d60094"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.292928 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.292967 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30b278b0-28a3-4bbb-beac-abae03d60094-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:26 crc kubenswrapper[4681]: I0404 02:40:26.292988 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nw4rz\" (UniqueName: \"kubernetes.io/projected/30b278b0-28a3-4bbb-beac-abae03d60094-kube-api-access-nw4rz\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:27 crc kubenswrapper[4681]: I0404 02:40:27.085120 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pkd9f" Apr 04 02:40:27 crc kubenswrapper[4681]: I0404 02:40:27.122356 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:27 crc kubenswrapper[4681]: I0404 02:40:27.133799 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pkd9f"] Apr 04 02:40:27 crc kubenswrapper[4681]: I0404 02:40:27.201250 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:40:27 crc kubenswrapper[4681]: E0404 02:40:27.201607 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:40:27 crc kubenswrapper[4681]: I0404 02:40:27.214534 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" path="/var/lib/kubelet/pods/30b278b0-28a3-4bbb-beac-abae03d60094/volumes" Apr 04 02:40:30 crc kubenswrapper[4681]: I0404 02:40:30.030958 4681 scope.go:117] "RemoveContainer" containerID="bff0ae772352cc97eb4be6c1f87de25ba85314992d0cb8fb6a1107e40e4b9ad6" Apr 04 02:40:42 crc kubenswrapper[4681]: I0404 02:40:42.201401 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:40:42 crc kubenswrapper[4681]: E0404 02:40:42.202192 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:40:43 crc kubenswrapper[4681]: I0404 02:40:43.255092 4681 generic.go:334] "Generic (PLEG): container finished" podID="bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" containerID="4bec6321309ad6a05b11d2cf0e8780cfdcc7d127826c998a56d5fd2d43e42e86" exitCode=0 Apr 04 02:40:43 crc kubenswrapper[4681]: I0404 02:40:43.255188 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" event={"ID":"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad","Type":"ContainerDied","Data":"4bec6321309ad6a05b11d2cf0e8780cfdcc7d127826c998a56d5fd2d43e42e86"} Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.735688 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.874567 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0\") pod \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.874621 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory\") pod \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.874859 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam\") pod \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.874923 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dw47\" (UniqueName: \"kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47\") pod \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.874996 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle\") pod \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\" (UID: \"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad\") " Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.880410 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47" (OuterVolumeSpecName: "kube-api-access-8dw47") pod "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" (UID: "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad"). InnerVolumeSpecName "kube-api-access-8dw47". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.881032 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" (UID: "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.903369 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" (UID: "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.905822 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" (UID: "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.915617 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory" (OuterVolumeSpecName: "inventory") pod "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" (UID: "bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.976791 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.976823 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dw47\" (UniqueName: \"kubernetes.io/projected/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-kube-api-access-8dw47\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.976832 4681 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.976841 4681 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:44 crc kubenswrapper[4681]: I0404 02:40:44.976850 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.273883 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" event={"ID":"bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad","Type":"ContainerDied","Data":"36e909cba0cea7a83f34bed2844eb23d97bfe3a0ac44ef236787b0aa218acfa3"} Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.273919 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36e909cba0cea7a83f34bed2844eb23d97bfe3a0ac44ef236787b0aa218acfa3" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.273970 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-fh7b5" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.362905 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl"] Apr 04 02:40:45 crc kubenswrapper[4681]: E0404 02:40:45.363655 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.363681 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Apr 04 02:40:45 crc kubenswrapper[4681]: E0404 02:40:45.363710 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="extract-utilities" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.363718 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="extract-utilities" Apr 04 02:40:45 crc kubenswrapper[4681]: E0404 02:40:45.363744 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="registry-server" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.363754 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="registry-server" Apr 04 02:40:45 crc kubenswrapper[4681]: E0404 02:40:45.363767 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="extract-content" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.363775 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="extract-content" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.364001 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.364034 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="30b278b0-28a3-4bbb-beac-abae03d60094" containerName="registry-server" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.364887 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.367991 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.368545 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.368782 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.369075 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.369293 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.369464 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.373364 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl"] Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486553 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486610 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486637 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7bj2\" (UniqueName: \"kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486727 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486797 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.486885 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588240 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588335 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7bj2\" (UniqueName: \"kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588398 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588438 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.588522 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.592430 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.592715 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.592872 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.594120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.594842 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.608221 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7bj2\" (UniqueName: \"kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl\" (UID: 
\"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:45 crc kubenswrapper[4681]: I0404 02:40:45.690591 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:40:46 crc kubenswrapper[4681]: I0404 02:40:46.331832 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl"] Apr 04 02:40:47 crc kubenswrapper[4681]: I0404 02:40:47.293303 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" event={"ID":"5c4ac822-458d-449c-b7e9-16ce85e56b63","Type":"ContainerStarted","Data":"3038467c060f4a6a84b822310f846e9043df03fbe5c1573256231fc38729662b"} Apr 04 02:40:48 crc kubenswrapper[4681]: I0404 02:40:48.304848 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" event={"ID":"5c4ac822-458d-449c-b7e9-16ce85e56b63","Type":"ContainerStarted","Data":"7237ce19c402e106d16717c6f1b5ee6f66759e777e9be815841ee5014042bd79"} Apr 04 02:40:48 crc kubenswrapper[4681]: I0404 02:40:48.325953 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" podStartSLOduration=2.30212577 podStartE2EDuration="3.325929389s" podCreationTimestamp="2026-04-04 02:40:45 +0000 UTC" firstStartedPulling="2026-04-04 02:40:46.337968367 +0000 UTC m=+2726.003743487" lastFinishedPulling="2026-04-04 02:40:47.361771986 +0000 UTC m=+2727.027547106" observedRunningTime="2026-04-04 02:40:48.322217617 +0000 UTC m=+2727.987992767" watchObservedRunningTime="2026-04-04 02:40:48.325929389 +0000 UTC m=+2727.991704539" Apr 04 02:40:57 crc kubenswrapper[4681]: I0404 02:40:57.200897 4681 scope.go:117] "RemoveContainer" 
containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:40:57 crc kubenswrapper[4681]: E0404 02:40:57.201793 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:41:10 crc kubenswrapper[4681]: I0404 02:41:10.200654 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:41:10 crc kubenswrapper[4681]: E0404 02:41:10.201420 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:41:24 crc kubenswrapper[4681]: I0404 02:41:24.201119 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:41:24 crc kubenswrapper[4681]: E0404 02:41:24.201994 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:41:34 crc kubenswrapper[4681]: I0404 02:41:34.745426 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="5c4ac822-458d-449c-b7e9-16ce85e56b63" containerID="7237ce19c402e106d16717c6f1b5ee6f66759e777e9be815841ee5014042bd79" exitCode=0 Apr 04 02:41:34 crc kubenswrapper[4681]: I0404 02:41:34.745548 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" event={"ID":"5c4ac822-458d-449c-b7e9-16ce85e56b63","Type":"ContainerDied","Data":"7237ce19c402e106d16717c6f1b5ee6f66759e777e9be815841ee5014042bd79"} Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.159389 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.167795 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.167913 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.167958 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.167985 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-x7bj2\" (UniqueName: \"kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.168035 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.168760 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle\") pod \"5c4ac822-458d-449c-b7e9-16ce85e56b63\" (UID: \"5c4ac822-458d-449c-b7e9-16ce85e56b63\") " Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.174333 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.174778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2" (OuterVolumeSpecName: "kube-api-access-x7bj2") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "kube-api-access-x7bj2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.217541 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.218428 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.221077 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.224601 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory" (OuterVolumeSpecName: "inventory") pod "5c4ac822-458d-449c-b7e9-16ce85e56b63" (UID: "5c4ac822-458d-449c-b7e9-16ce85e56b63"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272580 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272626 4681 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272637 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272648 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7bj2\" (UniqueName: \"kubernetes.io/projected/5c4ac822-458d-449c-b7e9-16ce85e56b63-kube-api-access-x7bj2\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272657 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.272669 4681 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c4ac822-458d-449c-b7e9-16ce85e56b63-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.770383 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" event={"ID":"5c4ac822-458d-449c-b7e9-16ce85e56b63","Type":"ContainerDied","Data":"3038467c060f4a6a84b822310f846e9043df03fbe5c1573256231fc38729662b"} Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.770424 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3038467c060f4a6a84b822310f846e9043df03fbe5c1573256231fc38729662b" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.770463 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.871140 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2"] Apr 04 02:41:36 crc kubenswrapper[4681]: E0404 02:41:36.871616 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4ac822-458d-449c-b7e9-16ce85e56b63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.871633 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4ac822-458d-449c-b7e9-16ce85e56b63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.871817 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4ac822-458d-449c-b7e9-16ce85e56b63" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.872516 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.874555 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.874928 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.874961 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.874967 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.877538 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.883142 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2"] Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.893967 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.894210 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.894484 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.894630 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9s4w\" (UniqueName: \"kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.894758 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.997176 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.997503 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l9s4w\" (UniqueName: \"kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.997682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.997847 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:36 crc kubenswrapper[4681]: I0404 02:41:36.997959 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.002331 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: 
\"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.002541 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.003249 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.008484 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.013395 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9s4w\" (UniqueName: \"kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.188293 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:41:37 crc kubenswrapper[4681]: I0404 02:41:37.754084 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2"] Apr 04 02:41:38 crc kubenswrapper[4681]: I0404 02:41:38.200729 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:41:38 crc kubenswrapper[4681]: E0404 02:41:38.201028 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:41:38 crc kubenswrapper[4681]: I0404 02:41:38.794337 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" event={"ID":"1784fc32-2907-4203-a7cd-0053cfe1d338","Type":"ContainerStarted","Data":"1e6de3ab4f97de70550bc6111ab84a813b778eebc3aa2e50f8a9fa2649ae9829"} Apr 04 02:41:38 crc kubenswrapper[4681]: I0404 02:41:38.794675 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" event={"ID":"1784fc32-2907-4203-a7cd-0053cfe1d338","Type":"ContainerStarted","Data":"a82d647557b4a270539fee8ec28fd8d7730af51d9fe9e99f6e03e8f0f1c8a22e"} Apr 04 02:41:38 crc kubenswrapper[4681]: I0404 02:41:38.821199 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" podStartSLOduration=2.344569786 podStartE2EDuration="2.821181312s" podCreationTimestamp="2026-04-04 02:41:36 +0000 UTC" firstStartedPulling="2026-04-04 02:41:37.78003034 +0000 
UTC m=+2777.445805470" lastFinishedPulling="2026-04-04 02:41:38.256641866 +0000 UTC m=+2777.922416996" observedRunningTime="2026-04-04 02:41:38.816024791 +0000 UTC m=+2778.481799911" watchObservedRunningTime="2026-04-04 02:41:38.821181312 +0000 UTC m=+2778.486956432" Apr 04 02:41:53 crc kubenswrapper[4681]: I0404 02:41:53.201530 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:41:53 crc kubenswrapper[4681]: E0404 02:41:53.202582 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.166762 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587842-7mwm6"] Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.170320 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.172828 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.173598 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.185160 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.187365 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587842-7mwm6"] Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.317574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z84rl\" (UniqueName: \"kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl\") pod \"auto-csr-approver-29587842-7mwm6\" (UID: \"70f0d175-8971-4b5b-b162-2ea220728686\") " pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.419243 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z84rl\" (UniqueName: \"kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl\") pod \"auto-csr-approver-29587842-7mwm6\" (UID: \"70f0d175-8971-4b5b-b162-2ea220728686\") " pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.438588 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z84rl\" (UniqueName: \"kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl\") pod \"auto-csr-approver-29587842-7mwm6\" (UID: \"70f0d175-8971-4b5b-b162-2ea220728686\") " 
pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.499587 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.946822 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587842-7mwm6"] Apr 04 02:42:00 crc kubenswrapper[4681]: I0404 02:42:00.949712 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:42:01 crc kubenswrapper[4681]: I0404 02:42:01.013085 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" event={"ID":"70f0d175-8971-4b5b-b162-2ea220728686","Type":"ContainerStarted","Data":"3352c82dcb589d57a066779f3b2cfe7434c2dd030f7f0c112087f443e61a602b"} Apr 04 02:42:03 crc kubenswrapper[4681]: I0404 02:42:03.035685 4681 generic.go:334] "Generic (PLEG): container finished" podID="70f0d175-8971-4b5b-b162-2ea220728686" containerID="bb5bb5303df40ead72ea7625c398b7b4787a61e7437517cd6eb649eefde1249f" exitCode=0 Apr 04 02:42:03 crc kubenswrapper[4681]: I0404 02:42:03.035764 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" event={"ID":"70f0d175-8971-4b5b-b162-2ea220728686","Type":"ContainerDied","Data":"bb5bb5303df40ead72ea7625c398b7b4787a61e7437517cd6eb649eefde1249f"} Apr 04 02:42:04 crc kubenswrapper[4681]: I0404 02:42:04.367715 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:04 crc kubenswrapper[4681]: I0404 02:42:04.499439 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z84rl\" (UniqueName: \"kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl\") pod \"70f0d175-8971-4b5b-b162-2ea220728686\" (UID: \"70f0d175-8971-4b5b-b162-2ea220728686\") " Apr 04 02:42:04 crc kubenswrapper[4681]: I0404 02:42:04.504721 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl" (OuterVolumeSpecName: "kube-api-access-z84rl") pod "70f0d175-8971-4b5b-b162-2ea220728686" (UID: "70f0d175-8971-4b5b-b162-2ea220728686"). InnerVolumeSpecName "kube-api-access-z84rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:42:04 crc kubenswrapper[4681]: I0404 02:42:04.601535 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z84rl\" (UniqueName: \"kubernetes.io/projected/70f0d175-8971-4b5b-b162-2ea220728686-kube-api-access-z84rl\") on node \"crc\" DevicePath \"\"" Apr 04 02:42:05 crc kubenswrapper[4681]: I0404 02:42:05.057887 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" event={"ID":"70f0d175-8971-4b5b-b162-2ea220728686","Type":"ContainerDied","Data":"3352c82dcb589d57a066779f3b2cfe7434c2dd030f7f0c112087f443e61a602b"} Apr 04 02:42:05 crc kubenswrapper[4681]: I0404 02:42:05.057940 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587842-7mwm6" Apr 04 02:42:05 crc kubenswrapper[4681]: I0404 02:42:05.057943 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3352c82dcb589d57a066779f3b2cfe7434c2dd030f7f0c112087f443e61a602b" Apr 04 02:42:05 crc kubenswrapper[4681]: I0404 02:42:05.442027 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587836-zh5fv"] Apr 04 02:42:05 crc kubenswrapper[4681]: I0404 02:42:05.450392 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587836-zh5fv"] Apr 04 02:42:07 crc kubenswrapper[4681]: I0404 02:42:07.201755 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:42:07 crc kubenswrapper[4681]: I0404 02:42:07.216627 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b33547-7be2-4182-878e-f992e13e6c86" path="/var/lib/kubelet/pods/d5b33547-7be2-4182-878e-f992e13e6c86/volumes" Apr 04 02:42:08 crc kubenswrapper[4681]: I0404 02:42:08.090095 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534"} Apr 04 02:42:30 crc kubenswrapper[4681]: I0404 02:42:30.150470 4681 scope.go:117] "RemoveContainer" containerID="912bd90cbb25f57ca00e377f51fcabe6692ce58f9244d17a84cb89a4f08dfa6e" Apr 04 02:42:42 crc kubenswrapper[4681]: I0404 02:42:42.977700 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:42 crc kubenswrapper[4681]: E0404 02:42:42.978929 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70f0d175-8971-4b5b-b162-2ea220728686" containerName="oc" Apr 04 02:42:42 crc kubenswrapper[4681]: 
I0404 02:42:42.978952 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="70f0d175-8971-4b5b-b162-2ea220728686" containerName="oc" Apr 04 02:42:42 crc kubenswrapper[4681]: I0404 02:42:42.979382 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="70f0d175-8971-4b5b-b162-2ea220728686" containerName="oc" Apr 04 02:42:42 crc kubenswrapper[4681]: I0404 02:42:42.981809 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:42 crc kubenswrapper[4681]: I0404 02:42:42.994597 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.136529 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.136637 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2lrd\" (UniqueName: \"kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.136669 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 
02:42:43.238803 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.238949 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2lrd\" (UniqueName: \"kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.238994 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.239810 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.239884 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.258765 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2lrd\" (UniqueName: \"kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd\") pod \"certified-operators-mhp4k\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.322370 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:43 crc kubenswrapper[4681]: I0404 02:42:43.867516 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:44 crc kubenswrapper[4681]: I0404 02:42:44.503748 4681 generic.go:334] "Generic (PLEG): container finished" podID="30059dca-36f2-41be-a925-895e7c0d8e84" containerID="2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10" exitCode=0 Apr 04 02:42:44 crc kubenswrapper[4681]: I0404 02:42:44.503817 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerDied","Data":"2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10"} Apr 04 02:42:44 crc kubenswrapper[4681]: I0404 02:42:44.504198 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerStarted","Data":"a7412ab70fea0bf11dca8e5443114ac9ca39a3612b84a100dba2aa89f77f9e2c"} Apr 04 02:42:45 crc kubenswrapper[4681]: I0404 02:42:45.515537 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerStarted","Data":"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796"} Apr 04 02:42:46 crc kubenswrapper[4681]: I0404 02:42:46.545095 4681 
generic.go:334] "Generic (PLEG): container finished" podID="30059dca-36f2-41be-a925-895e7c0d8e84" containerID="8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796" exitCode=0 Apr 04 02:42:46 crc kubenswrapper[4681]: I0404 02:42:46.545158 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerDied","Data":"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796"} Apr 04 02:42:47 crc kubenswrapper[4681]: I0404 02:42:47.556232 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerStarted","Data":"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5"} Apr 04 02:42:47 crc kubenswrapper[4681]: I0404 02:42:47.572767 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhp4k" podStartSLOduration=3.094515576 podStartE2EDuration="5.572743347s" podCreationTimestamp="2026-04-04 02:42:42 +0000 UTC" firstStartedPulling="2026-04-04 02:42:44.506766138 +0000 UTC m=+2844.172541258" lastFinishedPulling="2026-04-04 02:42:46.984993909 +0000 UTC m=+2846.650769029" observedRunningTime="2026-04-04 02:42:47.570598779 +0000 UTC m=+2847.236373929" watchObservedRunningTime="2026-04-04 02:42:47.572743347 +0000 UTC m=+2847.238518467" Apr 04 02:42:53 crc kubenswrapper[4681]: I0404 02:42:53.323506 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:53 crc kubenswrapper[4681]: I0404 02:42:53.323990 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:53 crc kubenswrapper[4681]: I0404 02:42:53.371540 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:53 crc kubenswrapper[4681]: I0404 02:42:53.692495 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:53 crc kubenswrapper[4681]: I0404 02:42:53.747048 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:55 crc kubenswrapper[4681]: I0404 02:42:55.664846 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhp4k" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="registry-server" containerID="cri-o://f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5" gracePeriod=2 Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.156668 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.313212 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities\") pod \"30059dca-36f2-41be-a925-895e7c0d8e84\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.313330 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k2lrd\" (UniqueName: \"kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd\") pod \"30059dca-36f2-41be-a925-895e7c0d8e84\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.313360 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content\") pod 
\"30059dca-36f2-41be-a925-895e7c0d8e84\" (UID: \"30059dca-36f2-41be-a925-895e7c0d8e84\") " Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.314692 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities" (OuterVolumeSpecName: "utilities") pod "30059dca-36f2-41be-a925-895e7c0d8e84" (UID: "30059dca-36f2-41be-a925-895e7c0d8e84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.318874 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd" (OuterVolumeSpecName: "kube-api-access-k2lrd") pod "30059dca-36f2-41be-a925-895e7c0d8e84" (UID: "30059dca-36f2-41be-a925-895e7c0d8e84"). InnerVolumeSpecName "kube-api-access-k2lrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.416380 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.416412 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k2lrd\" (UniqueName: \"kubernetes.io/projected/30059dca-36f2-41be-a925-895e7c0d8e84-kube-api-access-k2lrd\") on node \"crc\" DevicePath \"\"" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.685491 4681 generic.go:334] "Generic (PLEG): container finished" podID="30059dca-36f2-41be-a925-895e7c0d8e84" containerID="f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5" exitCode=0 Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.685545 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" 
event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerDied","Data":"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5"} Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.685602 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhp4k" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.685621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhp4k" event={"ID":"30059dca-36f2-41be-a925-895e7c0d8e84","Type":"ContainerDied","Data":"a7412ab70fea0bf11dca8e5443114ac9ca39a3612b84a100dba2aa89f77f9e2c"} Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.685660 4681 scope.go:117] "RemoveContainer" containerID="f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.708548 4681 scope.go:117] "RemoveContainer" containerID="8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.733995 4681 scope.go:117] "RemoveContainer" containerID="2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.801093 4681 scope.go:117] "RemoveContainer" containerID="f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5" Apr 04 02:42:56 crc kubenswrapper[4681]: E0404 02:42:56.801634 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5\": container with ID starting with f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5 not found: ID does not exist" containerID="f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.801680 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5"} err="failed to get container status \"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5\": rpc error: code = NotFound desc = could not find container \"f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5\": container with ID starting with f85b3ed6f4865d37af43bc22f795df37b48b264862b2095e8fbef2e2561cb0e5 not found: ID does not exist" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.801707 4681 scope.go:117] "RemoveContainer" containerID="8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796" Apr 04 02:42:56 crc kubenswrapper[4681]: E0404 02:42:56.802098 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796\": container with ID starting with 8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796 not found: ID does not exist" containerID="8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.802148 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796"} err="failed to get container status \"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796\": rpc error: code = NotFound desc = could not find container \"8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796\": container with ID starting with 8fc4c287ce94e9606557108433cb4d79fb09e23d7fae321f07f5e3566b8ed796 not found: ID does not exist" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.802180 4681 scope.go:117] "RemoveContainer" containerID="2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10" Apr 04 02:42:56 crc kubenswrapper[4681]: E0404 02:42:56.802498 4681 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10\": container with ID starting with 2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10 not found: ID does not exist" containerID="2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10" Apr 04 02:42:56 crc kubenswrapper[4681]: I0404 02:42:56.802529 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10"} err="failed to get container status \"2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10\": rpc error: code = NotFound desc = could not find container \"2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10\": container with ID starting with 2338f65d58b184a028bffbcb9503d999b435e2c402e887cdb3c51ed06b68da10 not found: ID does not exist" Apr 04 02:42:57 crc kubenswrapper[4681]: I0404 02:42:57.623375 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "30059dca-36f2-41be-a925-895e7c0d8e84" (UID: "30059dca-36f2-41be-a925-895e7c0d8e84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:42:57 crc kubenswrapper[4681]: I0404 02:42:57.648345 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/30059dca-36f2-41be-a925-895e7c0d8e84-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:42:57 crc kubenswrapper[4681]: I0404 02:42:57.929151 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:57 crc kubenswrapper[4681]: I0404 02:42:57.942473 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhp4k"] Apr 04 02:42:59 crc kubenswrapper[4681]: I0404 02:42:59.215050 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" path="/var/lib/kubelet/pods/30059dca-36f2-41be-a925-895e7c0d8e84/volumes" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.145499 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587844-rtm7n"] Apr 04 02:44:00 crc kubenswrapper[4681]: E0404 02:44:00.146693 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="extract-content" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.146714 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="extract-content" Apr 04 02:44:00 crc kubenswrapper[4681]: E0404 02:44:00.146754 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="extract-utilities" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.146767 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="extract-utilities" Apr 04 02:44:00 crc kubenswrapper[4681]: E0404 02:44:00.146812 4681 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="registry-server" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.146826 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="registry-server" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.147178 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="30059dca-36f2-41be-a925-895e7c0d8e84" containerName="registry-server" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.148220 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.150192 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.151059 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.151071 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.183831 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587844-rtm7n"] Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.267048 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp95\" (UniqueName: \"kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95\") pod \"auto-csr-approver-29587844-rtm7n\" (UID: \"bd7d2abb-280f-400b-ac78-1befdefa6c9c\") " pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.369329 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp95\" 
(UniqueName: \"kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95\") pod \"auto-csr-approver-29587844-rtm7n\" (UID: \"bd7d2abb-280f-400b-ac78-1befdefa6c9c\") " pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.399171 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp95\" (UniqueName: \"kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95\") pod \"auto-csr-approver-29587844-rtm7n\" (UID: \"bd7d2abb-280f-400b-ac78-1befdefa6c9c\") " pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.484778 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:00 crc kubenswrapper[4681]: I0404 02:44:00.970712 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587844-rtm7n"] Apr 04 02:44:01 crc kubenswrapper[4681]: I0404 02:44:01.352720 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" event={"ID":"bd7d2abb-280f-400b-ac78-1befdefa6c9c","Type":"ContainerStarted","Data":"780f6a98658476c5765945a321f056acb4a41f617f348d875e29c1171d3fad72"} Apr 04 02:44:02 crc kubenswrapper[4681]: I0404 02:44:02.364102 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" event={"ID":"bd7d2abb-280f-400b-ac78-1befdefa6c9c","Type":"ContainerStarted","Data":"c2b6ca826731d1140be1d2fd7ee6d692d7a8be750b2cc06ef0b6343bccbc80dc"} Apr 04 02:44:03 crc kubenswrapper[4681]: I0404 02:44:03.375973 4681 generic.go:334] "Generic (PLEG): container finished" podID="bd7d2abb-280f-400b-ac78-1befdefa6c9c" containerID="c2b6ca826731d1140be1d2fd7ee6d692d7a8be750b2cc06ef0b6343bccbc80dc" exitCode=0 Apr 04 02:44:03 crc kubenswrapper[4681]: I0404 02:44:03.376081 4681 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" event={"ID":"bd7d2abb-280f-400b-ac78-1befdefa6c9c","Type":"ContainerDied","Data":"c2b6ca826731d1140be1d2fd7ee6d692d7a8be750b2cc06ef0b6343bccbc80dc"} Apr 04 02:44:04 crc kubenswrapper[4681]: I0404 02:44:04.787670 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:04 crc kubenswrapper[4681]: I0404 02:44:04.970667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brp95\" (UniqueName: \"kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95\") pod \"bd7d2abb-280f-400b-ac78-1befdefa6c9c\" (UID: \"bd7d2abb-280f-400b-ac78-1befdefa6c9c\") " Apr 04 02:44:04 crc kubenswrapper[4681]: I0404 02:44:04.976547 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95" (OuterVolumeSpecName: "kube-api-access-brp95") pod "bd7d2abb-280f-400b-ac78-1befdefa6c9c" (UID: "bd7d2abb-280f-400b-ac78-1befdefa6c9c"). InnerVolumeSpecName "kube-api-access-brp95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.073897 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brp95\" (UniqueName: \"kubernetes.io/projected/bd7d2abb-280f-400b-ac78-1befdefa6c9c-kube-api-access-brp95\") on node \"crc\" DevicePath \"\"" Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.398862 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" event={"ID":"bd7d2abb-280f-400b-ac78-1befdefa6c9c","Type":"ContainerDied","Data":"780f6a98658476c5765945a321f056acb4a41f617f348d875e29c1171d3fad72"} Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.398937 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="780f6a98658476c5765945a321f056acb4a41f617f348d875e29c1171d3fad72" Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.398952 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587844-rtm7n" Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.444688 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587838-rgp5g"] Apr 04 02:44:05 crc kubenswrapper[4681]: I0404 02:44:05.454332 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587838-rgp5g"] Apr 04 02:44:07 crc kubenswrapper[4681]: I0404 02:44:07.214934 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c52306af-2657-4a6c-b9c3-b902bfc18de5" path="/var/lib/kubelet/pods/c52306af-2657-4a6c-b9c3-b902bfc18de5/volumes" Apr 04 02:44:26 crc kubenswrapper[4681]: I0404 02:44:26.524840 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Apr 04 02:44:26 crc kubenswrapper[4681]: I0404 02:44:26.525473 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:44:30 crc kubenswrapper[4681]: I0404 02:44:30.293146 4681 scope.go:117] "RemoveContainer" containerID="3ef0632f9358875838dc8e66eae889b5b88ce9fb0a6a7ae2cdfdc5a936583931" Apr 04 02:44:56 crc kubenswrapper[4681]: I0404 02:44:56.524755 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:44:56 crc kubenswrapper[4681]: I0404 02:44:56.525579 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.154516 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5"] Apr 04 02:45:00 crc kubenswrapper[4681]: E0404 02:45:00.155662 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d2abb-280f-400b-ac78-1befdefa6c9c" containerName="oc" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.155683 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d2abb-280f-400b-ac78-1befdefa6c9c" containerName="oc" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.155911 4681 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7d2abb-280f-400b-ac78-1befdefa6c9c" containerName="oc" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.156791 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.170965 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5"] Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.173654 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.174379 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.203447 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.203515 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttbp6\" (UniqueName: \"kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.203582 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.305186 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.305400 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.305457 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttbp6\" (UniqueName: \"kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.306983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc 
kubenswrapper[4681]: I0404 02:45:00.314935 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.324484 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttbp6\" (UniqueName: \"kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6\") pod \"collect-profiles-29587845-2dmx5\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.511602 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:00 crc kubenswrapper[4681]: I0404 02:45:00.983979 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5"] Apr 04 02:45:01 crc kubenswrapper[4681]: I0404 02:45:01.010727 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" event={"ID":"6342da4d-517c-485b-8d88-5fc59e542232","Type":"ContainerStarted","Data":"4646a20dd3891a0af702c3001b02905ebc137efe589e3cc9c06deb4cc0b87807"} Apr 04 02:45:02 crc kubenswrapper[4681]: I0404 02:45:02.026634 4681 generic.go:334] "Generic (PLEG): container finished" podID="6342da4d-517c-485b-8d88-5fc59e542232" containerID="3af678ce9b7c15c9c7b23fd447f9427f707a59e0bd727d8fe1ddb8d611854f49" exitCode=0 Apr 04 02:45:02 crc kubenswrapper[4681]: I0404 02:45:02.026763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" event={"ID":"6342da4d-517c-485b-8d88-5fc59e542232","Type":"ContainerDied","Data":"3af678ce9b7c15c9c7b23fd447f9427f707a59e0bd727d8fe1ddb8d611854f49"} Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.390465 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.466845 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume\") pod \"6342da4d-517c-485b-8d88-5fc59e542232\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.467245 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume\") pod \"6342da4d-517c-485b-8d88-5fc59e542232\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.467334 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttbp6\" (UniqueName: \"kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6\") pod \"6342da4d-517c-485b-8d88-5fc59e542232\" (UID: \"6342da4d-517c-485b-8d88-5fc59e542232\") " Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.467860 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume" (OuterVolumeSpecName: "config-volume") pod "6342da4d-517c-485b-8d88-5fc59e542232" (UID: "6342da4d-517c-485b-8d88-5fc59e542232"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.472436 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6342da4d-517c-485b-8d88-5fc59e542232" (UID: "6342da4d-517c-485b-8d88-5fc59e542232"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.472921 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6" (OuterVolumeSpecName: "kube-api-access-ttbp6") pod "6342da4d-517c-485b-8d88-5fc59e542232" (UID: "6342da4d-517c-485b-8d88-5fc59e542232"). InnerVolumeSpecName "kube-api-access-ttbp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.569918 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6342da4d-517c-485b-8d88-5fc59e542232-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.569955 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6342da4d-517c-485b-8d88-5fc59e542232-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:03 crc kubenswrapper[4681]: I0404 02:45:03.569970 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttbp6\" (UniqueName: \"kubernetes.io/projected/6342da4d-517c-485b-8d88-5fc59e542232-kube-api-access-ttbp6\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:04 crc kubenswrapper[4681]: I0404 02:45:04.049241 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" 
event={"ID":"6342da4d-517c-485b-8d88-5fc59e542232","Type":"ContainerDied","Data":"4646a20dd3891a0af702c3001b02905ebc137efe589e3cc9c06deb4cc0b87807"} Apr 04 02:45:04 crc kubenswrapper[4681]: I0404 02:45:04.049317 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5" Apr 04 02:45:04 crc kubenswrapper[4681]: I0404 02:45:04.049319 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4646a20dd3891a0af702c3001b02905ebc137efe589e3cc9c06deb4cc0b87807" Apr 04 02:45:04 crc kubenswrapper[4681]: I0404 02:45:04.477609 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q"] Apr 04 02:45:04 crc kubenswrapper[4681]: I0404 02:45:04.488071 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587800-g4n9q"] Apr 04 02:45:05 crc kubenswrapper[4681]: I0404 02:45:05.219102 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d26f8dc-36cc-47b4-9729-60177c3ca6e1" path="/var/lib/kubelet/pods/6d26f8dc-36cc-47b4-9729-60177c3ca6e1/volumes" Apr 04 02:45:26 crc kubenswrapper[4681]: I0404 02:45:26.524794 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:45:26 crc kubenswrapper[4681]: I0404 02:45:26.525516 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:45:26 crc 
kubenswrapper[4681]: I0404 02:45:26.525582 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:45:26 crc kubenswrapper[4681]: I0404 02:45:26.526586 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:45:26 crc kubenswrapper[4681]: I0404 02:45:26.526683 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534" gracePeriod=600 Apr 04 02:45:27 crc kubenswrapper[4681]: I0404 02:45:27.285330 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534" exitCode=0 Apr 04 02:45:27 crc kubenswrapper[4681]: I0404 02:45:27.285371 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534"} Apr 04 02:45:27 crc kubenswrapper[4681]: I0404 02:45:27.285906 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01"} Apr 04 02:45:27 crc kubenswrapper[4681]: I0404 
02:45:27.285931 4681 scope.go:117] "RemoveContainer" containerID="d0a6982dc73391bfcdbea37affa340ab6c2c8bfe22d8dc69ff3f47913a1861ee" Apr 04 02:45:30 crc kubenswrapper[4681]: I0404 02:45:30.326069 4681 generic.go:334] "Generic (PLEG): container finished" podID="1784fc32-2907-4203-a7cd-0053cfe1d338" containerID="1e6de3ab4f97de70550bc6111ab84a813b778eebc3aa2e50f8a9fa2649ae9829" exitCode=0 Apr 04 02:45:30 crc kubenswrapper[4681]: I0404 02:45:30.326174 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" event={"ID":"1784fc32-2907-4203-a7cd-0053cfe1d338","Type":"ContainerDied","Data":"1e6de3ab4f97de70550bc6111ab84a813b778eebc3aa2e50f8a9fa2649ae9829"} Apr 04 02:45:30 crc kubenswrapper[4681]: I0404 02:45:30.375941 4681 scope.go:117] "RemoveContainer" containerID="ad003577de2480f7f45393bef52ea2d1a876e1270c88a5d57053223c2df309f7" Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.820239 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.990744 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9s4w\" (UniqueName: \"kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w\") pod \"1784fc32-2907-4203-a7cd-0053cfe1d338\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.991084 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory\") pod \"1784fc32-2907-4203-a7cd-0053cfe1d338\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.991170 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0\") pod \"1784fc32-2907-4203-a7cd-0053cfe1d338\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.991229 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam\") pod \"1784fc32-2907-4203-a7cd-0053cfe1d338\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.991445 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle\") pod \"1784fc32-2907-4203-a7cd-0053cfe1d338\" (UID: \"1784fc32-2907-4203-a7cd-0053cfe1d338\") " Apr 04 02:45:31 crc kubenswrapper[4681]: I0404 02:45:31.996860 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1784fc32-2907-4203-a7cd-0053cfe1d338" (UID: "1784fc32-2907-4203-a7cd-0053cfe1d338"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:31.999967 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w" (OuterVolumeSpecName: "kube-api-access-l9s4w") pod "1784fc32-2907-4203-a7cd-0053cfe1d338" (UID: "1784fc32-2907-4203-a7cd-0053cfe1d338"). InnerVolumeSpecName "kube-api-access-l9s4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.025760 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory" (OuterVolumeSpecName: "inventory") pod "1784fc32-2907-4203-a7cd-0053cfe1d338" (UID: "1784fc32-2907-4203-a7cd-0053cfe1d338"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.027575 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1784fc32-2907-4203-a7cd-0053cfe1d338" (UID: "1784fc32-2907-4203-a7cd-0053cfe1d338"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.041682 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1784fc32-2907-4203-a7cd-0053cfe1d338" (UID: "1784fc32-2907-4203-a7cd-0053cfe1d338"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.093632 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.093667 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9s4w\" (UniqueName: \"kubernetes.io/projected/1784fc32-2907-4203-a7cd-0053cfe1d338-kube-api-access-l9s4w\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.093678 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.093687 4681 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.093696 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1784fc32-2907-4203-a7cd-0053cfe1d338-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.354194 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" event={"ID":"1784fc32-2907-4203-a7cd-0053cfe1d338","Type":"ContainerDied","Data":"a82d647557b4a270539fee8ec28fd8d7730af51d9fe9e99f6e03e8f0f1c8a22e"} Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.354556 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a82d647557b4a270539fee8ec28fd8d7730af51d9fe9e99f6e03e8f0f1c8a22e" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.354311 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.554598 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx"] Apr 04 02:45:32 crc kubenswrapper[4681]: E0404 02:45:32.555207 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1784fc32-2907-4203-a7cd-0053cfe1d338" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.555239 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1784fc32-2907-4203-a7cd-0053cfe1d338" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Apr 04 02:45:32 crc kubenswrapper[4681]: E0404 02:45:32.555287 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6342da4d-517c-485b-8d88-5fc59e542232" containerName="collect-profiles" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.555301 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6342da4d-517c-485b-8d88-5fc59e542232" containerName="collect-profiles" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.555701 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6342da4d-517c-485b-8d88-5fc59e542232" containerName="collect-profiles" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.555762 4681 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="1784fc32-2907-4203-a7cd-0053cfe1d338" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.556827 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.561181 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.562294 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.564162 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.564553 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.564712 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.565037 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.565222 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.580391 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx"] Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.703860 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.703934 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.704170 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jb9p\" (UniqueName: \"kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.704549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.704718 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: 
\"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.704859 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.704996 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.705050 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.705107 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 
02:45:32.705247 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.705441 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807151 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807245 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807397 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jb9p\" (UniqueName: 
\"kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807553 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807716 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807790 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807896 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.807959 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.808023 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.808089 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.808171 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.811980 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.813184 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.814326 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.814449 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.814733 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.815050 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.815134 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.816420 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.816936 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.818254 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.825415 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jb9p\" (UniqueName: \"kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lf7rx\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:32 crc kubenswrapper[4681]: I0404 02:45:32.884787 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:45:33 crc kubenswrapper[4681]: I0404 02:45:33.459718 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx"] Apr 04 02:45:34 crc kubenswrapper[4681]: I0404 02:45:34.381386 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" event={"ID":"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378","Type":"ContainerStarted","Data":"4a982404b698ba3388a3e8a05defc11cd0b12efddc3f948786688c15fd5777e6"} Apr 04 02:45:35 crc kubenswrapper[4681]: I0404 02:45:35.398388 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" event={"ID":"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378","Type":"ContainerStarted","Data":"c0ccab08286ff3c4eeabca272ef1735454469c52171e8f530d4632cc02e8a944"} Apr 04 02:45:35 crc kubenswrapper[4681]: I0404 02:45:35.430658 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" podStartSLOduration=2.7211731500000003 podStartE2EDuration="3.430641277s" 
podCreationTimestamp="2026-04-04 02:45:32 +0000 UTC" firstStartedPulling="2026-04-04 02:45:33.464709827 +0000 UTC m=+3013.130484947" lastFinishedPulling="2026-04-04 02:45:34.174177954 +0000 UTC m=+3013.839953074" observedRunningTime="2026-04-04 02:45:35.423623166 +0000 UTC m=+3015.089398286" watchObservedRunningTime="2026-04-04 02:45:35.430641277 +0000 UTC m=+3015.096416397" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.145882 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587846-85pnp"] Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.148327 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.151523 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.151691 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.152277 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.161830 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587846-85pnp"] Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.266781 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s46dd\" (UniqueName: \"kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd\") pod \"auto-csr-approver-29587846-85pnp\" (UID: \"a53b3215-4a81-4fae-9cc0-db1d56d865de\") " pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.368669 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-s46dd\" (UniqueName: \"kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd\") pod \"auto-csr-approver-29587846-85pnp\" (UID: \"a53b3215-4a81-4fae-9cc0-db1d56d865de\") " pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.390490 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s46dd\" (UniqueName: \"kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd\") pod \"auto-csr-approver-29587846-85pnp\" (UID: \"a53b3215-4a81-4fae-9cc0-db1d56d865de\") " pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.467978 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:00 crc kubenswrapper[4681]: I0404 02:46:00.992935 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587846-85pnp"] Apr 04 02:46:01 crc kubenswrapper[4681]: I0404 02:46:01.672066 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587846-85pnp" event={"ID":"a53b3215-4a81-4fae-9cc0-db1d56d865de","Type":"ContainerStarted","Data":"1c7adc96387dd108f208658ea949e1fc154aba727a3cc829f43d947f631efa70"} Apr 04 02:46:02 crc kubenswrapper[4681]: I0404 02:46:02.691827 4681 generic.go:334] "Generic (PLEG): container finished" podID="a53b3215-4a81-4fae-9cc0-db1d56d865de" containerID="9ad3a9b9d20882825766e7bafab29ae5d038ee27339aa68b40d58085f61120f2" exitCode=0 Apr 04 02:46:02 crc kubenswrapper[4681]: I0404 02:46:02.692307 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587846-85pnp" 
event={"ID":"a53b3215-4a81-4fae-9cc0-db1d56d865de","Type":"ContainerDied","Data":"9ad3a9b9d20882825766e7bafab29ae5d038ee27339aa68b40d58085f61120f2"} Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.025759 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587846-85pnp" Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.143239 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s46dd\" (UniqueName: \"kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd\") pod \"a53b3215-4a81-4fae-9cc0-db1d56d865de\" (UID: \"a53b3215-4a81-4fae-9cc0-db1d56d865de\") " Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.160577 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd" (OuterVolumeSpecName: "kube-api-access-s46dd") pod "a53b3215-4a81-4fae-9cc0-db1d56d865de" (UID: "a53b3215-4a81-4fae-9cc0-db1d56d865de"). InnerVolumeSpecName "kube-api-access-s46dd". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.245620 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s46dd\" (UniqueName: \"kubernetes.io/projected/a53b3215-4a81-4fae-9cc0-db1d56d865de-kube-api-access-s46dd\") on node \"crc\" DevicePath \"\""
Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.712081 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587846-85pnp" event={"ID":"a53b3215-4a81-4fae-9cc0-db1d56d865de","Type":"ContainerDied","Data":"1c7adc96387dd108f208658ea949e1fc154aba727a3cc829f43d947f631efa70"}
Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.712121 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c7adc96387dd108f208658ea949e1fc154aba727a3cc829f43d947f631efa70"
Apr 04 02:46:04 crc kubenswrapper[4681]: I0404 02:46:04.712171 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587846-85pnp"
Apr 04 02:46:05 crc kubenswrapper[4681]: I0404 02:46:05.096906 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587840-f7mjw"]
Apr 04 02:46:05 crc kubenswrapper[4681]: I0404 02:46:05.106557 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587840-f7mjw"]
Apr 04 02:46:05 crc kubenswrapper[4681]: I0404 02:46:05.210771 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4995802e-736b-427c-8252-e2da74db085c" path="/var/lib/kubelet/pods/4995802e-736b-427c-8252-e2da74db085c/volumes"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.816894 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:08 crc kubenswrapper[4681]: E0404 02:46:08.817766 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53b3215-4a81-4fae-9cc0-db1d56d865de" containerName="oc"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.817779 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53b3215-4a81-4fae-9cc0-db1d56d865de" containerName="oc"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.817977 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53b3215-4a81-4fae-9cc0-db1d56d865de" containerName="oc"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.820603 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.841704 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.931975 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v5cw\" (UniqueName: \"kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.932200 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:08 crc kubenswrapper[4681]: I0404 02:46:08.932326 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.034869 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.034954 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.035087 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v5cw\" (UniqueName: \"kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.035573 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.035572 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.070376 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v5cw\" (UniqueName: \"kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw\") pod \"redhat-operators-49wg4\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") " pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.185521 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.673636 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:09 crc kubenswrapper[4681]: I0404 02:46:09.765734 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerStarted","Data":"5e344722fa190700d1543005e9cbfc080e862cfcbd80b6566c3676c65fe8d977"}
Apr 04 02:46:10 crc kubenswrapper[4681]: I0404 02:46:10.799946 4681 generic.go:334] "Generic (PLEG): container finished" podID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerID="d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1" exitCode=0
Apr 04 02:46:10 crc kubenswrapper[4681]: I0404 02:46:10.800075 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerDied","Data":"d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1"}
Apr 04 02:46:12 crc kubenswrapper[4681]: I0404 02:46:12.825889 4681 generic.go:334] "Generic (PLEG): container finished" podID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerID="6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915" exitCode=0
Apr 04 02:46:12 crc kubenswrapper[4681]: I0404 02:46:12.825941 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerDied","Data":"6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915"}
Apr 04 02:46:13 crc kubenswrapper[4681]: I0404 02:46:13.837532 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerStarted","Data":"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"}
Apr 04 02:46:13 crc kubenswrapper[4681]: I0404 02:46:13.866875 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-49wg4" podStartSLOduration=3.261167518 podStartE2EDuration="5.866853154s" podCreationTimestamp="2026-04-04 02:46:08 +0000 UTC" firstStartedPulling="2026-04-04 02:46:10.803290931 +0000 UTC m=+3050.469066051" lastFinishedPulling="2026-04-04 02:46:13.408976557 +0000 UTC m=+3053.074751687" observedRunningTime="2026-04-04 02:46:13.854396984 +0000 UTC m=+3053.520172104" watchObservedRunningTime="2026-04-04 02:46:13.866853154 +0000 UTC m=+3053.532628274"
Apr 04 02:46:19 crc kubenswrapper[4681]: I0404 02:46:19.185908 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:19 crc kubenswrapper[4681]: I0404 02:46:19.186484 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:20 crc kubenswrapper[4681]: I0404 02:46:20.265633 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-49wg4" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="registry-server" probeResult="failure" output=<
Apr 04 02:46:20 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Apr 04 02:46:20 crc kubenswrapper[4681]: >
Apr 04 02:46:29 crc kubenswrapper[4681]: I0404 02:46:29.254225 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:29 crc kubenswrapper[4681]: I0404 02:46:29.327771 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:30 crc kubenswrapper[4681]: I0404 02:46:30.461842 4681 scope.go:117] "RemoveContainer" containerID="99c10a1bef4abf602886a1c64d460cc2e4b9e1847cc917d817f9f37a7168f427"
Apr 04 02:46:30 crc kubenswrapper[4681]: I0404 02:46:30.499928 4681 scope.go:117] "RemoveContainer" containerID="97102a0df58008757cb25ee9f6f52e0a505242a44bd7797452d2efe432cde408"
Apr 04 02:46:30 crc kubenswrapper[4681]: I0404 02:46:30.558203 4681 scope.go:117] "RemoveContainer" containerID="534ee420f8b830636249523df34972a67284bb1a84cab3b8ef7cfd1ae4dd3aa9"
Apr 04 02:46:30 crc kubenswrapper[4681]: I0404 02:46:30.602917 4681 scope.go:117] "RemoveContainer" containerID="fe521e60da101fef15c34d1b9f52d6395f5cbc797304d0de06be9e715465581b"
Apr 04 02:46:31 crc kubenswrapper[4681]: I0404 02:46:31.317583 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:31 crc kubenswrapper[4681]: I0404 02:46:31.317856 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-49wg4" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="registry-server" containerID="cri-o://282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d" gracePeriod=2
Apr 04 02:46:31 crc kubenswrapper[4681]: I0404 02:46:31.968044 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.020199 4681 generic.go:334] "Generic (PLEG): container finished" podID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerID="282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d" exitCode=0
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.020251 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerDied","Data":"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"}
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.020362 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-49wg4" event={"ID":"96db466d-3623-4cd5-920d-4b5495dcfb46","Type":"ContainerDied","Data":"5e344722fa190700d1543005e9cbfc080e862cfcbd80b6566c3676c65fe8d977"}
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.020400 4681 scope.go:117] "RemoveContainer" containerID="282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.020571 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-49wg4"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.044868 4681 scope.go:117] "RemoveContainer" containerID="6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.069841 4681 scope.go:117] "RemoveContainer" containerID="d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.116537 4681 scope.go:117] "RemoveContainer" containerID="282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"
Apr 04 02:46:32 crc kubenswrapper[4681]: E0404 02:46:32.117161 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d\": container with ID starting with 282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d not found: ID does not exist" containerID="282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.117248 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d"} err="failed to get container status \"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d\": rpc error: code = NotFound desc = could not find container \"282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d\": container with ID starting with 282a0b602b43b52e48c151e6d05d1e41eb5625d5ed70a45e585b69f504b8368d not found: ID does not exist"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.117329 4681 scope.go:117] "RemoveContainer" containerID="6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915"
Apr 04 02:46:32 crc kubenswrapper[4681]: E0404 02:46:32.118134 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915\": container with ID starting with 6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915 not found: ID does not exist" containerID="6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.118190 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915"} err="failed to get container status \"6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915\": rpc error: code = NotFound desc = could not find container \"6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915\": container with ID starting with 6133fb3cd45c6a987ebcd2861dbaa701e31653a2ab6b690ce2900ed8936b7915 not found: ID does not exist"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.118217 4681 scope.go:117] "RemoveContainer" containerID="d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1"
Apr 04 02:46:32 crc kubenswrapper[4681]: E0404 02:46:32.118600 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1\": container with ID starting with d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1 not found: ID does not exist" containerID="d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.118655 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1"} err="failed to get container status \"d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1\": rpc error: code = NotFound desc = could not find container \"d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1\": container with ID starting with d3e2383db78bf40e3351b3d642c1b1a77deef88d6bbc17c2106f73972e0687d1 not found: ID does not exist"
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.162690 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content\") pod \"96db466d-3623-4cd5-920d-4b5495dcfb46\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") "
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.162777 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities\") pod \"96db466d-3623-4cd5-920d-4b5495dcfb46\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") "
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.162817 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v5cw\" (UniqueName: \"kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw\") pod \"96db466d-3623-4cd5-920d-4b5495dcfb46\" (UID: \"96db466d-3623-4cd5-920d-4b5495dcfb46\") "
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.164549 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities" (OuterVolumeSpecName: "utilities") pod "96db466d-3623-4cd5-920d-4b5495dcfb46" (UID: "96db466d-3623-4cd5-920d-4b5495dcfb46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.168671 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw" (OuterVolumeSpecName: "kube-api-access-5v5cw") pod "96db466d-3623-4cd5-920d-4b5495dcfb46" (UID: "96db466d-3623-4cd5-920d-4b5495dcfb46"). InnerVolumeSpecName "kube-api-access-5v5cw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.265455 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.265693 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v5cw\" (UniqueName: \"kubernetes.io/projected/96db466d-3623-4cd5-920d-4b5495dcfb46-kube-api-access-5v5cw\") on node \"crc\" DevicePath \"\""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.292631 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96db466d-3623-4cd5-920d-4b5495dcfb46" (UID: "96db466d-3623-4cd5-920d-4b5495dcfb46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.357310 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.371330 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96db466d-3623-4cd5-920d-4b5495dcfb46-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 02:46:32 crc kubenswrapper[4681]: I0404 02:46:32.377842 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-49wg4"]
Apr 04 02:46:33 crc kubenswrapper[4681]: I0404 02:46:33.225816 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" path="/var/lib/kubelet/pods/96db466d-3623-4cd5-920d-4b5495dcfb46/volumes"
Apr 04 02:47:26 crc kubenswrapper[4681]: I0404 02:47:26.524074 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:47:26 crc kubenswrapper[4681]: I0404 02:47:26.524718 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:47:56 crc kubenswrapper[4681]: I0404 02:47:56.524510 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 02:47:56 crc kubenswrapper[4681]: I0404 02:47:56.525242 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.145599 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587848-ns5cp"]
Apr 04 02:48:00 crc kubenswrapper[4681]: E0404 02:48:00.146384 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="registry-server"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.146402 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="registry-server"
Apr 04 02:48:00 crc kubenswrapper[4681]: E0404 02:48:00.146447 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="extract-utilities"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.146458 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="extract-utilities"
Apr 04 02:48:00 crc kubenswrapper[4681]: E0404 02:48:00.146472 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="extract-content"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.146479 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="extract-content"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.146710 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="96db466d-3623-4cd5-920d-4b5495dcfb46" containerName="registry-server"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.147563 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587848-ns5cp"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.150488 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.151371 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.152207 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.163721 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587848-ns5cp"]
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.310040 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqjrn\" (UniqueName: \"kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn\") pod \"auto-csr-approver-29587848-ns5cp\" (UID: \"fc439b89-de86-4c8b-a244-924e7b571fb1\") " pod="openshift-infra/auto-csr-approver-29587848-ns5cp"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.411776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqjrn\" (UniqueName: \"kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn\") pod \"auto-csr-approver-29587848-ns5cp\" (UID: \"fc439b89-de86-4c8b-a244-924e7b571fb1\") " pod="openshift-infra/auto-csr-approver-29587848-ns5cp"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.443713 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqjrn\" (UniqueName: \"kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn\") pod \"auto-csr-approver-29587848-ns5cp\" (UID: \"fc439b89-de86-4c8b-a244-924e7b571fb1\") " pod="openshift-infra/auto-csr-approver-29587848-ns5cp"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.468520 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587848-ns5cp"
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.913663 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587848-ns5cp"]
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.920919 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.939709 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587848-ns5cp" event={"ID":"fc439b89-de86-4c8b-a244-924e7b571fb1","Type":"ContainerStarted","Data":"6b5fd951351fac55dc7e4077b44b82f6b40d86c7da5b57a147ec26ebb86cbc8f"}
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.941762 4681 generic.go:334] "Generic (PLEG): container finished" podID="1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" containerID="c0ccab08286ff3c4eeabca272ef1735454469c52171e8f530d4632cc02e8a944" exitCode=0
Apr 04 02:48:00 crc kubenswrapper[4681]: I0404 02:48:00.941795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" event={"ID":"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378","Type":"ContainerDied","Data":"c0ccab08286ff3c4eeabca272ef1735454469c52171e8f530d4632cc02e8a944"}
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.406233 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx"
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.561384 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.561531 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.561713 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.561815 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.561911 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jb9p\" (UniqueName: \"kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.562710 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.563103 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.563137 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.563192 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.563253 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.563307 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3\") pod \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\" (UID: \"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378\") "
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.569888 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.587337 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p" (OuterVolumeSpecName: "kube-api-access-5jb9p") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "kube-api-access-5jb9p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.604393 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.606205 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory" (OuterVolumeSpecName: "inventory") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.618433 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.626085 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.628859 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.631950 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.639381 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.641942 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.645034 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" (UID: "1c6f1a3c-3cad-4d39-8155-69c4a2ce1378"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.665989 4681 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666036 4681 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666051 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666067 4681 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666078 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666090 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666100 4681 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666111 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666120 4681 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666134 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-inventory\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.666146 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jb9p\" (UniqueName: \"kubernetes.io/projected/1c6f1a3c-3cad-4d39-8155-69c4a2ce1378-kube-api-access-5jb9p\") on node \"crc\" DevicePath \"\""
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.968547 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" event={"ID":"1c6f1a3c-3cad-4d39-8155-69c4a2ce1378","Type":"ContainerDied","Data":"4a982404b698ba3388a3e8a05defc11cd0b12efddc3f948786688c15fd5777e6"}
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.968632 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a982404b698ba3388a3e8a05defc11cd0b12efddc3f948786688c15fd5777e6"
Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.968564 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lf7rx" Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.970247 4681 generic.go:334] "Generic (PLEG): container finished" podID="fc439b89-de86-4c8b-a244-924e7b571fb1" containerID="b850dcd9333add2cbc5c47e52f33122674d144df0f92c32d7ff61095b55cf0d5" exitCode=0 Apr 04 02:48:02 crc kubenswrapper[4681]: I0404 02:48:02.970315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587848-ns5cp" event={"ID":"fc439b89-de86-4c8b-a244-924e7b571fb1","Type":"ContainerDied","Data":"b850dcd9333add2cbc5c47e52f33122674d144df0f92c32d7ff61095b55cf0d5"} Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.145561 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn"] Apr 04 02:48:03 crc kubenswrapper[4681]: E0404 02:48:03.146291 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" containerName="nova-edpm-deployment-openstack-edpm-ipam" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.146320 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" containerName="nova-edpm-deployment-openstack-edpm-ipam" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.146680 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c6f1a3c-3cad-4d39-8155-69c4a2ce1378" containerName="nova-edpm-deployment-openstack-edpm-ipam" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.147830 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.158783 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn"] Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.195866 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196281 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196351 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196365 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-w7ttd" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196063 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196703 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5hb\" (UniqueName: \"kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196763 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196791 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196825 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.196985 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.197045 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: 
\"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.197088 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.298454 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5hb\" (UniqueName: \"kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.298690 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.299118 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.299184 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.299332 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.299386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.299427 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.303010 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.303348 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.303876 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.304539 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.305202 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: 
\"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.305286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.321636 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5hb\" (UniqueName: \"kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:03 crc kubenswrapper[4681]: I0404 02:48:03.530881 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:48:04 crc kubenswrapper[4681]: I0404 02:48:04.067522 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn"] Apr 04 02:48:04 crc kubenswrapper[4681]: W0404 02:48:04.080212 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7ef2b80_e8d5_4f17_8617_d3a88ef35137.slice/crio-03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c WatchSource:0}: Error finding container 03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c: Status 404 returned error can't find the container with id 03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c Apr 04 02:48:04 crc kubenswrapper[4681]: I0404 02:48:04.272239 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587848-ns5cp" Apr 04 02:48:04 crc kubenswrapper[4681]: I0404 02:48:04.421392 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqjrn\" (UniqueName: \"kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn\") pod \"fc439b89-de86-4c8b-a244-924e7b571fb1\" (UID: \"fc439b89-de86-4c8b-a244-924e7b571fb1\") " Apr 04 02:48:04 crc kubenswrapper[4681]: I0404 02:48:04.426623 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn" (OuterVolumeSpecName: "kube-api-access-fqjrn") pod "fc439b89-de86-4c8b-a244-924e7b571fb1" (UID: "fc439b89-de86-4c8b-a244-924e7b571fb1"). InnerVolumeSpecName "kube-api-access-fqjrn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:48:04 crc kubenswrapper[4681]: I0404 02:48:04.524610 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqjrn\" (UniqueName: \"kubernetes.io/projected/fc439b89-de86-4c8b-a244-924e7b571fb1-kube-api-access-fqjrn\") on node \"crc\" DevicePath \"\"" Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.004190 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" event={"ID":"d7ef2b80-e8d5-4f17-8617-d3a88ef35137","Type":"ContainerStarted","Data":"d41aae677dd610bce9abfeb3e057d94ed1707a4b5ffbbfb9438a6c9a161c82e9"} Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.004616 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" event={"ID":"d7ef2b80-e8d5-4f17-8617-d3a88ef35137","Type":"ContainerStarted","Data":"03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c"} Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.006883 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587848-ns5cp" event={"ID":"fc439b89-de86-4c8b-a244-924e7b571fb1","Type":"ContainerDied","Data":"6b5fd951351fac55dc7e4077b44b82f6b40d86c7da5b57a147ec26ebb86cbc8f"} Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.006953 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b5fd951351fac55dc7e4077b44b82f6b40d86c7da5b57a147ec26ebb86cbc8f" Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.007057 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587848-ns5cp" Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.035573 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" podStartSLOduration=1.5600611149999999 podStartE2EDuration="2.035550512s" podCreationTimestamp="2026-04-04 02:48:03 +0000 UTC" firstStartedPulling="2026-04-04 02:48:04.083321585 +0000 UTC m=+3163.749096705" lastFinishedPulling="2026-04-04 02:48:04.558810942 +0000 UTC m=+3164.224586102" observedRunningTime="2026-04-04 02:48:05.024056659 +0000 UTC m=+3164.689831799" watchObservedRunningTime="2026-04-04 02:48:05.035550512 +0000 UTC m=+3164.701325652" Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.345193 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587842-7mwm6"] Apr 04 02:48:05 crc kubenswrapper[4681]: I0404 02:48:05.355991 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587842-7mwm6"] Apr 04 02:48:07 crc kubenswrapper[4681]: I0404 02:48:07.218302 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70f0d175-8971-4b5b-b162-2ea220728686" path="/var/lib/kubelet/pods/70f0d175-8971-4b5b-b162-2ea220728686/volumes" Apr 04 02:48:26 crc kubenswrapper[4681]: I0404 02:48:26.523714 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:48:26 crc kubenswrapper[4681]: I0404 02:48:26.524383 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:48:26 crc kubenswrapper[4681]: I0404 02:48:26.524430 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:48:26 crc kubenswrapper[4681]: I0404 02:48:26.525335 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:48:26 crc kubenswrapper[4681]: I0404 02:48:26.525404 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" gracePeriod=600 Apr 04 02:48:26 crc kubenswrapper[4681]: E0404 02:48:26.648493 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:48:27 crc kubenswrapper[4681]: I0404 02:48:27.242681 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" exitCode=0 Apr 04 02:48:27 crc kubenswrapper[4681]: I0404 02:48:27.242761 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01"} Apr 04 02:48:27 crc kubenswrapper[4681]: I0404 02:48:27.242853 4681 scope.go:117] "RemoveContainer" containerID="43e5a1e85819a357ad1bc32395360d171d4e27ab96b40ed80b78ca51550fe534" Apr 04 02:48:27 crc kubenswrapper[4681]: I0404 02:48:27.243646 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:48:27 crc kubenswrapper[4681]: E0404 02:48:27.244104 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:48:30 crc kubenswrapper[4681]: I0404 02:48:30.759656 4681 scope.go:117] "RemoveContainer" containerID="bb5bb5303df40ead72ea7625c398b7b4787a61e7437517cd6eb649eefde1249f" Apr 04 02:48:38 crc kubenswrapper[4681]: I0404 02:48:38.200988 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:48:38 crc kubenswrapper[4681]: E0404 02:48:38.201918 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.233779 4681 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:48:52 crc kubenswrapper[4681]: E0404 02:48:52.235124 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc439b89-de86-4c8b-a244-924e7b571fb1" containerName="oc" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.235161 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc439b89-de86-4c8b-a244-924e7b571fb1" containerName="oc" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.235700 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc439b89-de86-4c8b-a244-924e7b571fb1" containerName="oc" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.238650 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.245667 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.381360 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zqhk\" (UniqueName: \"kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.381451 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.381537 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.483872 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zqhk\" (UniqueName: \"kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.483946 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.484046 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.484675 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.485337 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.505369 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zqhk\" (UniqueName: \"kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk\") pod \"community-operators-2wsf6\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:52 crc kubenswrapper[4681]: I0404 02:48:52.574103 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:48:53 crc kubenswrapper[4681]: I0404 02:48:53.130126 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:48:53 crc kubenswrapper[4681]: W0404 02:48:53.132514 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4044e5bc_af4d_4dd9_8159_3a98d049bdba.slice/crio-29880e9f29b9cc40184c44af09ac4e18e2d86b3f485ae9f5454667f70cba7192 WatchSource:0}: Error finding container 29880e9f29b9cc40184c44af09ac4e18e2d86b3f485ae9f5454667f70cba7192: Status 404 returned error can't find the container with id 29880e9f29b9cc40184c44af09ac4e18e2d86b3f485ae9f5454667f70cba7192 Apr 04 02:48:53 crc kubenswrapper[4681]: I0404 02:48:53.202215 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:48:53 crc kubenswrapper[4681]: E0404 02:48:53.202497 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:48:53 crc kubenswrapper[4681]: I0404 02:48:53.529440 4681 generic.go:334] "Generic (PLEG): container finished" podID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerID="0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97" exitCode=0 Apr 04 02:48:53 crc kubenswrapper[4681]: I0404 02:48:53.529450 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerDied","Data":"0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97"} Apr 04 02:48:53 crc kubenswrapper[4681]: I0404 02:48:53.530489 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerStarted","Data":"29880e9f29b9cc40184c44af09ac4e18e2d86b3f485ae9f5454667f70cba7192"} Apr 04 02:48:54 crc kubenswrapper[4681]: I0404 02:48:54.542729 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerStarted","Data":"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68"} Apr 04 02:48:55 crc kubenswrapper[4681]: I0404 02:48:55.557859 4681 generic.go:334] "Generic (PLEG): container finished" podID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerID="c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68" exitCode=0 Apr 04 02:48:55 crc kubenswrapper[4681]: I0404 02:48:55.557908 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" 
event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerDied","Data":"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68"} Apr 04 02:48:56 crc kubenswrapper[4681]: I0404 02:48:56.569867 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerStarted","Data":"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901"} Apr 04 02:48:56 crc kubenswrapper[4681]: I0404 02:48:56.590289 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2wsf6" podStartSLOduration=2.179229546 podStartE2EDuration="4.590249045s" podCreationTimestamp="2026-04-04 02:48:52 +0000 UTC" firstStartedPulling="2026-04-04 02:48:53.532004687 +0000 UTC m=+3213.197779807" lastFinishedPulling="2026-04-04 02:48:55.943024176 +0000 UTC m=+3215.608799306" observedRunningTime="2026-04-04 02:48:56.587024017 +0000 UTC m=+3216.252799137" watchObservedRunningTime="2026-04-04 02:48:56.590249045 +0000 UTC m=+3216.256024175" Apr 04 02:49:02 crc kubenswrapper[4681]: I0404 02:49:02.574675 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:02 crc kubenswrapper[4681]: I0404 02:49:02.575495 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:02 crc kubenswrapper[4681]: I0404 02:49:02.622583 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:02 crc kubenswrapper[4681]: I0404 02:49:02.671640 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:02 crc kubenswrapper[4681]: I0404 02:49:02.865183 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:49:04 crc kubenswrapper[4681]: I0404 02:49:04.666147 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2wsf6" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="registry-server" containerID="cri-o://daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901" gracePeriod=2 Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.133470 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.296742 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities\") pod \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.296810 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content\") pod \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.297354 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zqhk\" (UniqueName: \"kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk\") pod \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\" (UID: \"4044e5bc-af4d-4dd9-8159-3a98d049bdba\") " Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.297847 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities" (OuterVolumeSpecName: "utilities") pod "4044e5bc-af4d-4dd9-8159-3a98d049bdba" (UID: 
"4044e5bc-af4d-4dd9-8159-3a98d049bdba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.309337 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk" (OuterVolumeSpecName: "kube-api-access-8zqhk") pod "4044e5bc-af4d-4dd9-8159-3a98d049bdba" (UID: "4044e5bc-af4d-4dd9-8159-3a98d049bdba"). InnerVolumeSpecName "kube-api-access-8zqhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.350709 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4044e5bc-af4d-4dd9-8159-3a98d049bdba" (UID: "4044e5bc-af4d-4dd9-8159-3a98d049bdba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.399310 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.399538 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4044e5bc-af4d-4dd9-8159-3a98d049bdba-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.399598 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zqhk\" (UniqueName: \"kubernetes.io/projected/4044e5bc-af4d-4dd9-8159-3a98d049bdba-kube-api-access-8zqhk\") on node \"crc\" DevicePath \"\"" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.678387 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerID="daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901" exitCode=0 Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.678450 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2wsf6" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.678471 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerDied","Data":"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901"} Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.679419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2wsf6" event={"ID":"4044e5bc-af4d-4dd9-8159-3a98d049bdba","Type":"ContainerDied","Data":"29880e9f29b9cc40184c44af09ac4e18e2d86b3f485ae9f5454667f70cba7192"} Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.679449 4681 scope.go:117] "RemoveContainer" containerID="daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.706136 4681 scope.go:117] "RemoveContainer" containerID="c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.734358 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.745040 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2wsf6"] Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.763286 4681 scope.go:117] "RemoveContainer" containerID="0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.817979 4681 scope.go:117] "RemoveContainer" 
containerID="daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901" Apr 04 02:49:05 crc kubenswrapper[4681]: E0404 02:49:05.818456 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901\": container with ID starting with daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901 not found: ID does not exist" containerID="daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.818485 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901"} err="failed to get container status \"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901\": rpc error: code = NotFound desc = could not find container \"daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901\": container with ID starting with daf855abe402206841e1c7b5b8de15fb11647bd6b290dfb97db26fac65404901 not found: ID does not exist" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.818505 4681 scope.go:117] "RemoveContainer" containerID="c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68" Apr 04 02:49:05 crc kubenswrapper[4681]: E0404 02:49:05.818881 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68\": container with ID starting with c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68 not found: ID does not exist" containerID="c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.818901 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68"} err="failed to get container status \"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68\": rpc error: code = NotFound desc = could not find container \"c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68\": container with ID starting with c3235a31101b85fb24ed18bed6c63280710a81e170cf854fed407054fbe9fb68 not found: ID does not exist" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.818914 4681 scope.go:117] "RemoveContainer" containerID="0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97" Apr 04 02:49:05 crc kubenswrapper[4681]: E0404 02:49:05.819232 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97\": container with ID starting with 0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97 not found: ID does not exist" containerID="0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97" Apr 04 02:49:05 crc kubenswrapper[4681]: I0404 02:49:05.819252 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97"} err="failed to get container status \"0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97\": rpc error: code = NotFound desc = could not find container \"0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97\": container with ID starting with 0b0e2a2267ce373c66afc2e475e377371a2b06307f0d1f94bf3fdea2f577db97 not found: ID does not exist" Apr 04 02:49:07 crc kubenswrapper[4681]: I0404 02:49:07.214243 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" path="/var/lib/kubelet/pods/4044e5bc-af4d-4dd9-8159-3a98d049bdba/volumes" Apr 04 02:49:08 crc kubenswrapper[4681]: I0404 
02:49:08.201808 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:49:08 crc kubenswrapper[4681]: E0404 02:49:08.202235 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:49:19 crc kubenswrapper[4681]: I0404 02:49:19.201507 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:49:19 crc kubenswrapper[4681]: E0404 02:49:19.202498 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:49:32 crc kubenswrapper[4681]: I0404 02:49:32.201397 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:49:32 crc kubenswrapper[4681]: E0404 02:49:32.202217 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:49:43 crc 
kubenswrapper[4681]: I0404 02:49:43.201124 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:49:43 crc kubenswrapper[4681]: E0404 02:49:43.201856 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:49:57 crc kubenswrapper[4681]: I0404 02:49:57.201603 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:49:57 crc kubenswrapper[4681]: E0404 02:49:57.202316 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.143705 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587850-9pj6m"] Apr 04 02:50:00 crc kubenswrapper[4681]: E0404 02:50:00.144600 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="registry-server" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.144614 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="registry-server" Apr 04 02:50:00 crc kubenswrapper[4681]: E0404 02:50:00.144635 4681 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="extract-utilities" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.144641 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="extract-utilities" Apr 04 02:50:00 crc kubenswrapper[4681]: E0404 02:50:00.144657 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="extract-content" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.144665 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="extract-content" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.144863 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4044e5bc-af4d-4dd9-8159-3a98d049bdba" containerName="registry-server" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.145518 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.147925 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.148125 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.148245 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.168422 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587850-9pj6m"] Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.231171 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8kvn\" (UniqueName: \"kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn\") pod \"auto-csr-approver-29587850-9pj6m\" (UID: \"95508813-b747-4dbb-8b5b-f845e7044829\") " pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.333788 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8kvn\" (UniqueName: \"kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn\") pod \"auto-csr-approver-29587850-9pj6m\" (UID: \"95508813-b747-4dbb-8b5b-f845e7044829\") " pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.349913 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8kvn\" (UniqueName: \"kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn\") pod \"auto-csr-approver-29587850-9pj6m\" (UID: \"95508813-b747-4dbb-8b5b-f845e7044829\") " 
pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.474778 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:00 crc kubenswrapper[4681]: I0404 02:50:00.963006 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587850-9pj6m"] Apr 04 02:50:01 crc kubenswrapper[4681]: I0404 02:50:01.166367 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" event={"ID":"95508813-b747-4dbb-8b5b-f845e7044829","Type":"ContainerStarted","Data":"a8b28c88b808bfbff6c88624cd0d43ae60023c718c6d2c60a24dd4c8336276b2"} Apr 04 02:50:03 crc kubenswrapper[4681]: I0404 02:50:03.188564 4681 generic.go:334] "Generic (PLEG): container finished" podID="95508813-b747-4dbb-8b5b-f845e7044829" containerID="d386b321f444658bac254380f2c6812782087281ca92c0871e6c0aa7fe1ab957" exitCode=0 Apr 04 02:50:03 crc kubenswrapper[4681]: I0404 02:50:03.188801 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" event={"ID":"95508813-b747-4dbb-8b5b-f845e7044829","Type":"ContainerDied","Data":"d386b321f444658bac254380f2c6812782087281ca92c0871e6c0aa7fe1ab957"} Apr 04 02:50:03 crc kubenswrapper[4681]: I0404 02:50:03.192303 4681 generic.go:334] "Generic (PLEG): container finished" podID="d7ef2b80-e8d5-4f17-8617-d3a88ef35137" containerID="d41aae677dd610bce9abfeb3e057d94ed1707a4b5ffbbfb9438a6c9a161c82e9" exitCode=0 Apr 04 02:50:03 crc kubenswrapper[4681]: I0404 02:50:03.192339 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" event={"ID":"d7ef2b80-e8d5-4f17-8617-d3a88ef35137","Type":"ContainerDied","Data":"d41aae677dd610bce9abfeb3e057d94ed1707a4b5ffbbfb9438a6c9a161c82e9"} Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.613724 4681 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.622578 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719155 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719538 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8kvn\" (UniqueName: \"kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn\") pod \"95508813-b747-4dbb-8b5b-f845e7044829\" (UID: \"95508813-b747-4dbb-8b5b-f845e7044829\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719690 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx5hb\" (UniqueName: \"kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719807 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719838 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719872 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.719890 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2\") pod \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\" (UID: \"d7ef2b80-e8d5-4f17-8617-d3a88ef35137\") " Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.724627 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn" (OuterVolumeSpecName: "kube-api-access-r8kvn") pod "95508813-b747-4dbb-8b5b-f845e7044829" (UID: "95508813-b747-4dbb-8b5b-f845e7044829"). InnerVolumeSpecName "kube-api-access-r8kvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.730378 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb" (OuterVolumeSpecName: "kube-api-access-dx5hb") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "kube-api-access-dx5hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.731307 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.753732 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.755555 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.763623 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.770854 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.776782 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory" (OuterVolumeSpecName: "inventory") pod "d7ef2b80-e8d5-4f17-8617-d3a88ef35137" (UID: "d7ef2b80-e8d5-4f17-8617-d3a88ef35137"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822523 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx5hb\" (UniqueName: \"kubernetes.io/projected/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-kube-api-access-dx5hb\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822563 4681 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822577 4681 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-inventory\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822590 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822603 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822616 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822631 4681 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/d7ef2b80-e8d5-4f17-8617-d3a88ef35137-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:04 crc kubenswrapper[4681]: I0404 02:50:04.822645 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8kvn\" (UniqueName: \"kubernetes.io/projected/95508813-b747-4dbb-8b5b-f845e7044829-kube-api-access-r8kvn\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.214554 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.216646 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.217149 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587850-9pj6m" event={"ID":"95508813-b747-4dbb-8b5b-f845e7044829","Type":"ContainerDied","Data":"a8b28c88b808bfbff6c88624cd0d43ae60023c718c6d2c60a24dd4c8336276b2"} Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.217179 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b28c88b808bfbff6c88624cd0d43ae60023c718c6d2c60a24dd4c8336276b2" Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.217192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn" event={"ID":"d7ef2b80-e8d5-4f17-8617-d3a88ef35137","Type":"ContainerDied","Data":"03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c"} Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.217202 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03cc045b2cfd8db3831674562989efec5ff32b8699a6a6b93231a62ffe5d227c" Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.699603 4681 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587844-rtm7n"] Apr 04 02:50:05 crc kubenswrapper[4681]: I0404 02:50:05.710060 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587844-rtm7n"] Apr 04 02:50:07 crc kubenswrapper[4681]: I0404 02:50:07.215678 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7d2abb-280f-400b-ac78-1befdefa6c9c" path="/var/lib/kubelet/pods/bd7d2abb-280f-400b-ac78-1befdefa6c9c/volumes" Apr 04 02:50:12 crc kubenswrapper[4681]: I0404 02:50:12.201561 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:50:12 crc kubenswrapper[4681]: E0404 02:50:12.202603 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.037211 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:19 crc kubenswrapper[4681]: E0404 02:50:19.038140 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95508813-b747-4dbb-8b5b-f845e7044829" containerName="oc" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.038153 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="95508813-b747-4dbb-8b5b-f845e7044829" containerName="oc" Apr 04 02:50:19 crc kubenswrapper[4681]: E0404 02:50:19.038171 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ef2b80-e8d5-4f17-8617-d3a88ef35137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Apr 04 02:50:19 crc 
kubenswrapper[4681]: I0404 02:50:19.038179 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ef2b80-e8d5-4f17-8617-d3a88ef35137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.038376 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ef2b80-e8d5-4f17-8617-d3a88ef35137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.038392 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="95508813-b747-4dbb-8b5b-f845e7044829" containerName="oc" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.039736 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.055461 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.167827 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkz6l\" (UniqueName: \"kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.167971 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.168015 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.270033 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.270089 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.270177 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkz6l\" (UniqueName: \"kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.270558 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.270696 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.288468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkz6l\" (UniqueName: \"kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l\") pod \"redhat-marketplace-hkk26\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.364426 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:19 crc kubenswrapper[4681]: I0404 02:50:19.821866 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:20 crc kubenswrapper[4681]: I0404 02:50:20.387544 4681 generic.go:334] "Generic (PLEG): container finished" podID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerID="5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818" exitCode=0 Apr 04 02:50:20 crc kubenswrapper[4681]: I0404 02:50:20.387654 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerDied","Data":"5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818"} Apr 04 02:50:20 crc kubenswrapper[4681]: I0404 02:50:20.387964 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerStarted","Data":"de35143439a8418c1f3ed3bcf1c8bee67ff83a8aaca2255df2862fea85ac1d3d"} Apr 04 02:50:22 crc kubenswrapper[4681]: I0404 02:50:22.414157 4681 generic.go:334] "Generic (PLEG): container 
finished" podID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerID="a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d" exitCode=0 Apr 04 02:50:22 crc kubenswrapper[4681]: I0404 02:50:22.414227 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerDied","Data":"a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d"} Apr 04 02:50:24 crc kubenswrapper[4681]: I0404 02:50:24.201566 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:50:24 crc kubenswrapper[4681]: E0404 02:50:24.202282 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:50:24 crc kubenswrapper[4681]: I0404 02:50:24.444370 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerStarted","Data":"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1"} Apr 04 02:50:24 crc kubenswrapper[4681]: I0404 02:50:24.466914 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hkk26" podStartSLOduration=2.563592461 podStartE2EDuration="5.466895387s" podCreationTimestamp="2026-04-04 02:50:19 +0000 UTC" firstStartedPulling="2026-04-04 02:50:20.390201763 +0000 UTC m=+3300.055976883" lastFinishedPulling="2026-04-04 02:50:23.293504689 +0000 UTC m=+3302.959279809" observedRunningTime="2026-04-04 02:50:24.461703915 +0000 UTC 
m=+3304.127479035" watchObservedRunningTime="2026-04-04 02:50:24.466895387 +0000 UTC m=+3304.132670507" Apr 04 02:50:29 crc kubenswrapper[4681]: I0404 02:50:29.364961 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:29 crc kubenswrapper[4681]: I0404 02:50:29.365558 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:29 crc kubenswrapper[4681]: I0404 02:50:29.413136 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:29 crc kubenswrapper[4681]: I0404 02:50:29.569096 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:29 crc kubenswrapper[4681]: I0404 02:50:29.648489 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:30 crc kubenswrapper[4681]: I0404 02:50:30.895836 4681 scope.go:117] "RemoveContainer" containerID="c2b6ca826731d1140be1d2fd7ee6d692d7a8be750b2cc06ef0b6343bccbc80dc" Apr 04 02:50:31 crc kubenswrapper[4681]: I0404 02:50:31.548602 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hkk26" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="registry-server" containerID="cri-o://c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1" gracePeriod=2 Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.109632 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.261521 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkz6l\" (UniqueName: \"kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l\") pod \"97ef63ca-62f1-427e-8133-b65435c3ad9d\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.261597 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities\") pod \"97ef63ca-62f1-427e-8133-b65435c3ad9d\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.261652 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content\") pod \"97ef63ca-62f1-427e-8133-b65435c3ad9d\" (UID: \"97ef63ca-62f1-427e-8133-b65435c3ad9d\") " Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.262596 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities" (OuterVolumeSpecName: "utilities") pod "97ef63ca-62f1-427e-8133-b65435c3ad9d" (UID: "97ef63ca-62f1-427e-8133-b65435c3ad9d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.267879 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l" (OuterVolumeSpecName: "kube-api-access-tkz6l") pod "97ef63ca-62f1-427e-8133-b65435c3ad9d" (UID: "97ef63ca-62f1-427e-8133-b65435c3ad9d"). InnerVolumeSpecName "kube-api-access-tkz6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.292371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97ef63ca-62f1-427e-8133-b65435c3ad9d" (UID: "97ef63ca-62f1-427e-8133-b65435c3ad9d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.364389 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkz6l\" (UniqueName: \"kubernetes.io/projected/97ef63ca-62f1-427e-8133-b65435c3ad9d-kube-api-access-tkz6l\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.364430 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.364441 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ef63ca-62f1-427e-8133-b65435c3ad9d-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.562082 4681 generic.go:334] "Generic (PLEG): container finished" podID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerID="c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1" exitCode=0 Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.562157 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hkk26" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.562162 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerDied","Data":"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1"} Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.562555 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hkk26" event={"ID":"97ef63ca-62f1-427e-8133-b65435c3ad9d","Type":"ContainerDied","Data":"de35143439a8418c1f3ed3bcf1c8bee67ff83a8aaca2255df2862fea85ac1d3d"} Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.562578 4681 scope.go:117] "RemoveContainer" containerID="c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.598304 4681 scope.go:117] "RemoveContainer" containerID="a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.614233 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.627608 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hkk26"] Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.635981 4681 scope.go:117] "RemoveContainer" containerID="5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.663714 4681 scope.go:117] "RemoveContainer" containerID="c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1" Apr 04 02:50:32 crc kubenswrapper[4681]: E0404 02:50:32.664171 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1\": container with ID starting with c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1 not found: ID does not exist" containerID="c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.664209 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1"} err="failed to get container status \"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1\": rpc error: code = NotFound desc = could not find container \"c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1\": container with ID starting with c3c0b7d9f7f585694cedf70a9d2ff5c9526a102ce43ada2bcc2ce689fe2b1bc1 not found: ID does not exist" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.664229 4681 scope.go:117] "RemoveContainer" containerID="a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d" Apr 04 02:50:32 crc kubenswrapper[4681]: E0404 02:50:32.667682 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d\": container with ID starting with a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d not found: ID does not exist" containerID="a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.667731 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d"} err="failed to get container status \"a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d\": rpc error: code = NotFound desc = could not find container \"a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d\": container with ID 
starting with a3ca5259344b32786d04bd6d73398194dd2c1150817eb968422a58eecf66729d not found: ID does not exist" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.667762 4681 scope.go:117] "RemoveContainer" containerID="5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818" Apr 04 02:50:32 crc kubenswrapper[4681]: E0404 02:50:32.668100 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818\": container with ID starting with 5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818 not found: ID does not exist" containerID="5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818" Apr 04 02:50:32 crc kubenswrapper[4681]: I0404 02:50:32.668128 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818"} err="failed to get container status \"5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818\": rpc error: code = NotFound desc = could not find container \"5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818\": container with ID starting with 5a8aff0d5deb908d66d6f0ba4b935b1ea4ab1cf5c52a200254c4bcbe298af818 not found: ID does not exist" Apr 04 02:50:33 crc kubenswrapper[4681]: I0404 02:50:33.222224 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" path="/var/lib/kubelet/pods/97ef63ca-62f1-427e-8133-b65435c3ad9d/volumes" Apr 04 02:50:35 crc kubenswrapper[4681]: I0404 02:50:35.201422 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:50:35 crc kubenswrapper[4681]: E0404 02:50:35.202035 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.643172 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: E0404 02:50:38.644277 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="extract-utilities" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.645616 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="extract-utilities" Apr 04 02:50:38 crc kubenswrapper[4681]: E0404 02:50:38.645629 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="extract-content" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.645637 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="extract-content" Apr 04 02:50:38 crc kubenswrapper[4681]: E0404 02:50:38.645675 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="registry-server" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.645680 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="registry-server" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.645889 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ef63ca-62f1-427e-8133-b65435c3ad9d" containerName="registry-server" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.647085 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.659882 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.662371 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.716717 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.718816 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.720528 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.729977 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.785205 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.786979 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.788917 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.797953 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812298 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812339 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812375 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-dev\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812390 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-scripts\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812413 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-lib-modules\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812450 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812470 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812498 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-run\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812529 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812550 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812567 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812583 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812647 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-sys\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812679 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.812694 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xqrp\" (UniqueName: \"kubernetes.io/projected/b36b7670-b847-4635-8dd5-8d5ea0d7825c-kube-api-access-6xqrp\") pod \"cinder-backup-0\" (UID: 
\"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.914903 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.914949 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.914981 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-run\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915002 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915035 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc 
kubenswrapper[4681]: I0404 02:50:38.915065 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915080 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-sys\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915089 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-run\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915096 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915159 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915215 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915241 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915276 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915303 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915325 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915386 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915415 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-dev\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915434 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915456 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzwc8\" (UniqueName: \"kubernetes.io/projected/1ba04b4d-7697-4313-8759-e95a65957daa-kube-api-access-dzwc8\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915543 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915587 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-sys\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc 
kubenswrapper[4681]: I0404 02:50:38.915602 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915630 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915666 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915690 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-sys\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915705 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915785 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915851 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915875 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xqrp\" (UniqueName: \"kubernetes.io/projected/b36b7670-b847-4635-8dd5-8d5ea0d7825c-kube-api-access-6xqrp\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915923 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915947 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915964 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.915987 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916002 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916032 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql5bt\" (UniqueName: \"kubernetes.io/projected/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-kube-api-access-ql5bt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-dev\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916083 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-scripts\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916114 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916142 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-lib-modules\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916191 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916219 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-run\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916251 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916286 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916305 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916339 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916358 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916389 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916405 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916492 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916843 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-etc-nvme\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.916960 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.917001 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-dev\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: 
I0404 02:50:38.917397 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-lib-modules\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.917500 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/b36b7670-b847-4635-8dd5-8d5ea0d7825c-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.921130 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.921512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data-custom\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.925126 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-config-data\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.927907 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b36b7670-b847-4635-8dd5-8d5ea0d7825c-scripts\") pod \"cinder-backup-0\" (UID: 
\"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.933532 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xqrp\" (UniqueName: \"kubernetes.io/projected/b36b7670-b847-4635-8dd5-8d5ea0d7825c-kube-api-access-6xqrp\") pod \"cinder-backup-0\" (UID: \"b36b7670-b847-4635-8dd5-8d5ea0d7825c\") " pod="openstack/cinder-backup-0" Apr 04 02:50:38 crc kubenswrapper[4681]: I0404 02:50:38.963586 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.017982 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018038 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018086 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018108 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-machine-id\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018128 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018149 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018176 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-sys\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018164 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018196 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018220 4681 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-sys\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018190 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018209 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018240 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018196 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-dev\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018255 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018372 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018382 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018426 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018454 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-dev\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018478 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018491 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018503 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzwc8\" (UniqueName: \"kubernetes.io/projected/1ba04b4d-7697-4313-8759-e95a65957daa-kube-api-access-dzwc8\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018519 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018523 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-dev\") pod \"cinder-volume-nfs-0\" (UID: 
\"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018512 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018562 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018584 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018603 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018657 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018682 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018719 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql5bt\" (UniqueName: \"kubernetes.io/projected/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-kube-api-access-ql5bt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018747 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018764 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018782 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018799 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-run\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018820 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018826 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018843 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018866 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018955 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.018988 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.019042 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.019195 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1ba04b4d-7697-4313-8759-e95a65957daa-run\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.022025 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.022099 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.022144 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.024673 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.025112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.025571 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: 
\"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.025836 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.026064 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.026163 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.028704 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ba04b4d-7697-4313-8759-e95a65957daa-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.031905 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.035291 4681 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ql5bt\" (UniqueName: \"kubernetes.io/projected/c220dfdf-0f59-4093-b5dd-b2eba1a80fee-kube-api-access-ql5bt\") pod \"cinder-volume-nfs-2-0\" (UID: \"c220dfdf-0f59-4093-b5dd-b2eba1a80fee\") " pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.035730 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzwc8\" (UniqueName: \"kubernetes.io/projected/1ba04b4d-7697-4313-8759-e95a65957daa-kube-api-access-dzwc8\") pod \"cinder-volume-nfs-0\" (UID: \"1ba04b4d-7697-4313-8759-e95a65957daa\") " pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.036276 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.109140 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.544429 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.642211 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b36b7670-b847-4635-8dd5-8d5ea0d7825c","Type":"ContainerStarted","Data":"1e7147833ddd3a4d1fc75c1da4bc9ee09bf8c32cd3d18c0920095038e193aa7d"} Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.696598 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Apr 04 02:50:39 crc kubenswrapper[4681]: W0404 02:50:39.733661 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ba04b4d_7697_4313_8759_e95a65957daa.slice/crio-994e99ab3b09881d13a60188c63fa3bb09858b0d9bd3f351e9d532d242a212f3 WatchSource:0}: Error finding container 
994e99ab3b09881d13a60188c63fa3bb09858b0d9bd3f351e9d532d242a212f3: Status 404 returned error can't find the container with id 994e99ab3b09881d13a60188c63fa3bb09858b0d9bd3f351e9d532d242a212f3 Apr 04 02:50:39 crc kubenswrapper[4681]: I0404 02:50:39.801782 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Apr 04 02:50:40 crc kubenswrapper[4681]: I0404 02:50:40.656512 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b36b7670-b847-4635-8dd5-8d5ea0d7825c","Type":"ContainerStarted","Data":"a2c244347dfc5d7b95f4448a867cec35933f67df1ac5d47887ae997cf2c0b2f2"} Apr 04 02:50:40 crc kubenswrapper[4681]: I0404 02:50:40.658711 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"1ba04b4d-7697-4313-8759-e95a65957daa","Type":"ContainerStarted","Data":"0bbf25ae29477f6fbd0fa189c8bd04cc39543d8376305c1d05aa077c1b100855"} Apr 04 02:50:40 crc kubenswrapper[4681]: I0404 02:50:40.658751 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"1ba04b4d-7697-4313-8759-e95a65957daa","Type":"ContainerStarted","Data":"994e99ab3b09881d13a60188c63fa3bb09858b0d9bd3f351e9d532d242a212f3"} Apr 04 02:50:40 crc kubenswrapper[4681]: I0404 02:50:40.661095 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c220dfdf-0f59-4093-b5dd-b2eba1a80fee","Type":"ContainerStarted","Data":"c4e2c49966d6bb29964f5ffbd74e6f8c130deef107726d743ef991c06af8a1d2"} Apr 04 02:50:40 crc kubenswrapper[4681]: I0404 02:50:40.661126 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c220dfdf-0f59-4093-b5dd-b2eba1a80fee","Type":"ContainerStarted","Data":"45e40b70d3f67c9bd276aea0ff4d2900933adf1c9a901397b795e48a1a6ef7b8"} Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.674086 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-volume-nfs-2-0" event={"ID":"c220dfdf-0f59-4093-b5dd-b2eba1a80fee","Type":"ContainerStarted","Data":"19ba7c46634db734a77308b271a5fdb085ce97e9c82b336811150b0ae98a5fc6"} Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.677372 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"b36b7670-b847-4635-8dd5-8d5ea0d7825c","Type":"ContainerStarted","Data":"032be3649a241bd40b46739526b305f603c22f7ecee0916f4ed0339ea0c8628e"} Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.680760 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"1ba04b4d-7697-4313-8759-e95a65957daa","Type":"ContainerStarted","Data":"410ae3bddd5d6a8cb63e0c65c8689f84e95eb4e2eea5cf5fe530d263bd8c0d93"} Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.706689 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=3.5575311320000003 podStartE2EDuration="3.706666725s" podCreationTimestamp="2026-04-04 02:50:38 +0000 UTC" firstStartedPulling="2026-04-04 02:50:39.804541334 +0000 UTC m=+3319.470316454" lastFinishedPulling="2026-04-04 02:50:39.953676927 +0000 UTC m=+3319.619452047" observedRunningTime="2026-04-04 02:50:41.701875814 +0000 UTC m=+3321.367650944" watchObservedRunningTime="2026-04-04 02:50:41.706666725 +0000 UTC m=+3321.372441855" Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.741690 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=3.525789335 podStartE2EDuration="3.741674481s" podCreationTimestamp="2026-04-04 02:50:38 +0000 UTC" firstStartedPulling="2026-04-04 02:50:39.736589067 +0000 UTC m=+3319.402364187" lastFinishedPulling="2026-04-04 02:50:39.952474213 +0000 UTC m=+3319.618249333" observedRunningTime="2026-04-04 02:50:41.734079503 +0000 UTC m=+3321.399854623" watchObservedRunningTime="2026-04-04 
02:50:41.741674481 +0000 UTC m=+3321.407449601" Apr 04 02:50:41 crc kubenswrapper[4681]: I0404 02:50:41.767449 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.53696368 podStartE2EDuration="3.767432484s" podCreationTimestamp="2026-04-04 02:50:38 +0000 UTC" firstStartedPulling="2026-04-04 02:50:39.550823644 +0000 UTC m=+3319.216598774" lastFinishedPulling="2026-04-04 02:50:39.781292458 +0000 UTC m=+3319.447067578" observedRunningTime="2026-04-04 02:50:41.757945025 +0000 UTC m=+3321.423720145" watchObservedRunningTime="2026-04-04 02:50:41.767432484 +0000 UTC m=+3321.433207604" Apr 04 02:50:43 crc kubenswrapper[4681]: I0404 02:50:43.964402 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Apr 04 02:50:44 crc kubenswrapper[4681]: I0404 02:50:44.036521 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:44 crc kubenswrapper[4681]: I0404 02:50:44.109568 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:50:47 crc kubenswrapper[4681]: I0404 02:50:47.201429 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:50:47 crc kubenswrapper[4681]: E0404 02:50:47.202357 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:50:49 crc kubenswrapper[4681]: I0404 02:50:49.217455 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-volume-nfs-0" Apr 04 02:50:49 crc kubenswrapper[4681]: I0404 02:50:49.289839 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Apr 04 02:50:49 crc kubenswrapper[4681]: I0404 02:50:49.784470 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Apr 04 02:51:00 crc kubenswrapper[4681]: I0404 02:51:00.200701 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:51:00 crc kubenswrapper[4681]: E0404 02:51:00.201394 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:51:12 crc kubenswrapper[4681]: I0404 02:51:12.200748 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:51:12 crc kubenswrapper[4681]: E0404 02:51:12.201494 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:51:24 crc kubenswrapper[4681]: I0404 02:51:24.201141 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:51:24 crc kubenswrapper[4681]: E0404 02:51:24.203414 4681 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:51:36 crc kubenswrapper[4681]: I0404 02:51:36.201219 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:51:36 crc kubenswrapper[4681]: E0404 02:51:36.202107 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.050184 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.050903 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="prometheus" containerID="cri-o://003583dad88aa5da1629220cea8700d686ed286cbe9b53127604904cd23c5237" gracePeriod=600 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.051403 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="thanos-sidecar" containerID="cri-o://44e0064a035b6cb26bde2c70e3bf59c59b5f159533019c04f98c4f0c47779629" gracePeriod=600 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 
02:51:46.051563 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="config-reloader" containerID="cri-o://1b95be46818a425f71dd352fae0232fb95f43768be84cde531e71eaff5f2625f" gracePeriod=600 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435480 4681 generic.go:334] "Generic (PLEG): container finished" podID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerID="44e0064a035b6cb26bde2c70e3bf59c59b5f159533019c04f98c4f0c47779629" exitCode=0 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435842 4681 generic.go:334] "Generic (PLEG): container finished" podID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerID="1b95be46818a425f71dd352fae0232fb95f43768be84cde531e71eaff5f2625f" exitCode=0 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435858 4681 generic.go:334] "Generic (PLEG): container finished" podID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerID="003583dad88aa5da1629220cea8700d686ed286cbe9b53127604904cd23c5237" exitCode=0 Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435593 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerDied","Data":"44e0064a035b6cb26bde2c70e3bf59c59b5f159533019c04f98c4f0c47779629"} Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerDied","Data":"1b95be46818a425f71dd352fae0232fb95f43768be84cde531e71eaff5f2625f"} Apr 04 02:51:46 crc kubenswrapper[4681]: I0404 02:51:46.435925 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerDied","Data":"003583dad88aa5da1629220cea8700d686ed286cbe9b53127604904cd23c5237"} Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.203874 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.312645 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313108 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313197 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313222 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313280 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313325 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313362 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313610 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313667 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9qc2\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313723 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313758 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313780 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313871 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file\") pod \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\" (UID: \"03a92323-ceb1-4b90-b706-b0d9f924bdd8\") " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.313887 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.314514 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.314617 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.314900 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.319695 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out" (OuterVolumeSpecName: "config-out") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.323892 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.324088 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.324568 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2" (OuterVolumeSpecName: "kube-api-access-m9qc2") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "kube-api-access-m9qc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.324635 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "tls-assets". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.326757 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.327000 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.339000 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config" (OuterVolumeSpecName: "config") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.359944 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "pvc-be343324-7666-479e-a8ae-26270ab2cfcc". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417593 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") on node \"crc\" " Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417647 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9qc2\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-kube-api-access-m9qc2\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417660 4681 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417673 4681 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417684 4681 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417695 4681 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417728 4681 
reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417737 4681 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/03a92323-ceb1-4b90-b706-b0d9f924bdd8-config-out\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417745 4681 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/03a92323-ceb1-4b90-b706-b0d9f924bdd8-tls-assets\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417753 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.417763 4681 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/03a92323-ceb1-4b90-b706-b0d9f924bdd8-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.519132 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config" (OuterVolumeSpecName: "web-config") pod "03a92323-ceb1-4b90-b706-b0d9f924bdd8" (UID: "03a92323-ceb1-4b90-b706-b0d9f924bdd8"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.522459 4681 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/03a92323-ceb1-4b90-b706-b0d9f924bdd8-web-config\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.530343 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"03a92323-ceb1-4b90-b706-b0d9f924bdd8","Type":"ContainerDied","Data":"0646b11d8a5b8040ce3dcefc0cbaf12a765e18ad6a66decb6b3f11500db5537b"} Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.530408 4681 scope.go:117] "RemoveContainer" containerID="44e0064a035b6cb26bde2c70e3bf59c59b5f159533019c04f98c4f0c47779629" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.530601 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.544615 4681 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.544776 4681 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-be343324-7666-479e-a8ae-26270ab2cfcc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc") on node "crc" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.627147 4681 reconciler_common.go:293] "Volume detached for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") on node \"crc\" DevicePath \"\"" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.628200 4681 scope.go:117] "RemoveContainer" containerID="1b95be46818a425f71dd352fae0232fb95f43768be84cde531e71eaff5f2625f" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.630497 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.643847 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681003 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:47 crc kubenswrapper[4681]: E0404 02:51:47.681418 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="prometheus" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681436 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="prometheus" Apr 04 02:51:47 crc kubenswrapper[4681]: E0404 02:51:47.681453 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="thanos-sidecar" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681459 4681 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="thanos-sidecar" Apr 04 02:51:47 crc kubenswrapper[4681]: E0404 02:51:47.681484 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="init-config-reloader" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681491 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="init-config-reloader" Apr 04 02:51:47 crc kubenswrapper[4681]: E0404 02:51:47.681499 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="config-reloader" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681504 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="config-reloader" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681692 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="config-reloader" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681710 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="thanos-sidecar" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.681721 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" containerName="prometheus" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.682717 4681 scope.go:117] "RemoveContainer" containerID="003583dad88aa5da1629220cea8700d686ed286cbe9b53127604904cd23c5237" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.683770 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.687599 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.687776 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.687904 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.688028 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.689472 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.690203 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-xgv8d" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.690929 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.695927 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.725580 4681 scope.go:117] "RemoveContainer" containerID="cf130503707aebac706fc013898e06a0fddbf3e63baa8daccf410f2a5cef86d1" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733367 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733415 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733454 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733534 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-config\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733563 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/757762e6-7520-4fec-8323-41bf4a53a889-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733628 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733698 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733767 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733824 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9cr\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-kube-api-access-pd9cr\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733858 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: 
\"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733905 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.733963 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.734002 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.758554 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836435 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc 
kubenswrapper[4681]: I0404 02:51:47.836487 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836520 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836573 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-config\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836596 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/757762e6-7520-4fec-8323-41bf4a53a889-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836654 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836685 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836720 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836805 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd9cr\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-kube-api-access-pd9cr\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836831 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836859 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836897 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.836930 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.838861 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.842652 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 
02:51:47.843068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/757762e6-7520-4fec-8323-41bf4a53a889-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.848720 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-config\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.851801 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.859600 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd9cr\" (UniqueName: \"kubernetes.io/projected/757762e6-7520-4fec-8323-41bf4a53a889-kube-api-access-pd9cr\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.862722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc 
kubenswrapper[4681]: I0404 02:51:47.862921 4681 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.862970 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1afbb87d4ef2fe230ee8a94c40d1d069f8d7a05e7e9d3bfdb3b9deafd206a254/globalmount\"" pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.864785 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.866516 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.867106 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " 
pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.868626 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/757762e6-7520-4fec-8323-41bf4a53a889-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.870194 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/757762e6-7520-4fec-8323-41bf4a53a889-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:47 crc kubenswrapper[4681]: I0404 02:51:47.911008 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be343324-7666-479e-a8ae-26270ab2cfcc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be343324-7666-479e-a8ae-26270ab2cfcc\") pod \"prometheus-metric-storage-0\" (UID: \"757762e6-7520-4fec-8323-41bf4a53a889\") " pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:48 crc kubenswrapper[4681]: I0404 02:51:48.009330 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Apr 04 02:51:48 crc kubenswrapper[4681]: I0404 02:51:48.477485 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Apr 04 02:51:48 crc kubenswrapper[4681]: I0404 02:51:48.547327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerStarted","Data":"155ae3db01cbc00fa5b3d46c6e90d8c38b7e0debb4b11da11ac50e8f71508650"} Apr 04 02:51:49 crc kubenswrapper[4681]: I0404 02:51:49.214186 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03a92323-ceb1-4b90-b706-b0d9f924bdd8" path="/var/lib/kubelet/pods/03a92323-ceb1-4b90-b706-b0d9f924bdd8/volumes" Apr 04 02:51:51 crc kubenswrapper[4681]: I0404 02:51:51.211681 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:51:51 crc kubenswrapper[4681]: E0404 02:51:51.212099 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:51:52 crc kubenswrapper[4681]: I0404 02:51:52.593700 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerStarted","Data":"06fe93b3681e5d4801372dd71f7ce733a97a703ba6fb513d729ebc4ac443671e"} Apr 04 02:51:59 crc kubenswrapper[4681]: I0404 02:51:59.680472 4681 generic.go:334] "Generic (PLEG): container finished" podID="757762e6-7520-4fec-8323-41bf4a53a889" 
containerID="06fe93b3681e5d4801372dd71f7ce733a97a703ba6fb513d729ebc4ac443671e" exitCode=0 Apr 04 02:51:59 crc kubenswrapper[4681]: I0404 02:51:59.680557 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerDied","Data":"06fe93b3681e5d4801372dd71f7ce733a97a703ba6fb513d729ebc4ac443671e"} Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.150186 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587852-kv79j"] Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.151921 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.153839 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.155468 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.155559 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.176911 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587852-kv79j"] Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.212328 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99kc\" (UniqueName: \"kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc\") pod \"auto-csr-approver-29587852-kv79j\" (UID: \"21c07a25-0759-4f0f-8eee-025be8595ff9\") " pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.314868 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-l99kc\" (UniqueName: \"kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc\") pod \"auto-csr-approver-29587852-kv79j\" (UID: \"21c07a25-0759-4f0f-8eee-025be8595ff9\") " pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.336879 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99kc\" (UniqueName: \"kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc\") pod \"auto-csr-approver-29587852-kv79j\" (UID: \"21c07a25-0759-4f0f-8eee-025be8595ff9\") " pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.472401 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.703514 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerStarted","Data":"b6f897921ef70a0a939dbaeeb6ebf24e667def807639fe0cbff2e5d2683a2c60"} Apr 04 02:52:00 crc kubenswrapper[4681]: I0404 02:52:00.936190 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587852-kv79j"] Apr 04 02:52:00 crc kubenswrapper[4681]: W0404 02:52:00.945920 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21c07a25_0759_4f0f_8eee_025be8595ff9.slice/crio-a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776 WatchSource:0}: Error finding container a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776: Status 404 returned error can't find the container with id a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776 Apr 04 02:52:01 crc kubenswrapper[4681]: 
I0404 02:52:01.714105 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587852-kv79j" event={"ID":"21c07a25-0759-4f0f-8eee-025be8595ff9","Type":"ContainerStarted","Data":"a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776"} Apr 04 02:52:02 crc kubenswrapper[4681]: I0404 02:52:02.727340 4681 generic.go:334] "Generic (PLEG): container finished" podID="21c07a25-0759-4f0f-8eee-025be8595ff9" containerID="425b576efaa4c630fe453c683b482f4f9ee9d6f1fcc6f7c028b88655bdb54413" exitCode=0 Apr 04 02:52:02 crc kubenswrapper[4681]: I0404 02:52:02.727408 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587852-kv79j" event={"ID":"21c07a25-0759-4f0f-8eee-025be8595ff9","Type":"ContainerDied","Data":"425b576efaa4c630fe453c683b482f4f9ee9d6f1fcc6f7c028b88655bdb54413"} Apr 04 02:52:03 crc kubenswrapper[4681]: I0404 02:52:03.741431 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerStarted","Data":"087687d82c9df8babc99cf3e33f09dff686acc9c3039b8ab0a5a43beb49f6070"} Apr 04 02:52:03 crc kubenswrapper[4681]: I0404 02:52:03.741712 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"757762e6-7520-4fec-8323-41bf4a53a889","Type":"ContainerStarted","Data":"d0c10fe4f2baec8f4f4c6f106020ffd3c6ea54574b0fba672cbc664c85224544"} Apr 04 02:52:03 crc kubenswrapper[4681]: I0404 02:52:03.791589 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.79156225 podStartE2EDuration="16.79156225s" podCreationTimestamp="2026-04-04 02:51:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 02:52:03.778781671 +0000 UTC m=+3403.444556821" 
watchObservedRunningTime="2026-04-04 02:52:03.79156225 +0000 UTC m=+3403.457337390" Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.095415 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.201703 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99kc\" (UniqueName: \"kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc\") pod \"21c07a25-0759-4f0f-8eee-025be8595ff9\" (UID: \"21c07a25-0759-4f0f-8eee-025be8595ff9\") " Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.206975 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc" (OuterVolumeSpecName: "kube-api-access-l99kc") pod "21c07a25-0759-4f0f-8eee-025be8595ff9" (UID: "21c07a25-0759-4f0f-8eee-025be8595ff9"). InnerVolumeSpecName "kube-api-access-l99kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.304838 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99kc\" (UniqueName: \"kubernetes.io/projected/21c07a25-0759-4f0f-8eee-025be8595ff9-kube-api-access-l99kc\") on node \"crc\" DevicePath \"\"" Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.754779 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587852-kv79j" event={"ID":"21c07a25-0759-4f0f-8eee-025be8595ff9","Type":"ContainerDied","Data":"a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776"} Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.754849 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587852-kv79j" Apr 04 02:52:04 crc kubenswrapper[4681]: I0404 02:52:04.754874 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a703f291faf7d36763bc12f6bff9e22d65bc4545fa748fa0c9440786769fa776" Apr 04 02:52:05 crc kubenswrapper[4681]: I0404 02:52:05.173463 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587846-85pnp"] Apr 04 02:52:05 crc kubenswrapper[4681]: I0404 02:52:05.186143 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587846-85pnp"] Apr 04 02:52:05 crc kubenswrapper[4681]: I0404 02:52:05.201594 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:52:05 crc kubenswrapper[4681]: E0404 02:52:05.201942 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:52:05 crc kubenswrapper[4681]: I0404 02:52:05.216019 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53b3215-4a81-4fae-9cc0-db1d56d865de" path="/var/lib/kubelet/pods/a53b3215-4a81-4fae-9cc0-db1d56d865de/volumes" Apr 04 02:52:08 crc kubenswrapper[4681]: I0404 02:52:08.010042 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Apr 04 02:52:18 crc kubenswrapper[4681]: I0404 02:52:18.010077 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Apr 04 02:52:18 crc kubenswrapper[4681]: I0404 02:52:18.026367 4681 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Apr 04 02:52:18 crc kubenswrapper[4681]: I0404 02:52:18.201103 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:52:18 crc kubenswrapper[4681]: E0404 02:52:18.201483 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:52:18 crc kubenswrapper[4681]: I0404 02:52:18.924794 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Apr 04 02:52:31 crc kubenswrapper[4681]: I0404 02:52:31.000817 4681 scope.go:117] "RemoveContainer" containerID="9ad3a9b9d20882825766e7bafab29ae5d038ee27339aa68b40d58085f61120f2" Apr 04 02:52:32 crc kubenswrapper[4681]: I0404 02:52:32.203386 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:52:32 crc kubenswrapper[4681]: E0404 02:52:32.203929 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.417579 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Apr 04 02:52:33 crc 
kubenswrapper[4681]: E0404 02:52:33.418461 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21c07a25-0759-4f0f-8eee-025be8595ff9" containerName="oc" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.418480 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="21c07a25-0759-4f0f-8eee-025be8595ff9" containerName="oc" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.418796 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c07a25-0759-4f0f-8eee-025be8595ff9" containerName="oc" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.419661 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.425701 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.425792 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5r25l" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.425981 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.425856 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.435105 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.522619 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 
04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.522729 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.522805 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.624997 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625073 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625139 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc 
kubenswrapper[4681]: I0404 02:52:33.625179 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625212 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625244 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625344 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625373 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.625437 4681 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txt6t\" (UniqueName: \"kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.627094 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.627186 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.631321 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.727768 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.727876 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.727936 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728033 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728069 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728139 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txt6t\" (UniqueName: \"kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728285 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" 
(UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728306 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.728443 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.732048 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.733617 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.743976 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txt6t\" (UniqueName: \"kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " 
pod="openstack/tempest-tests-tempest" Apr 04 02:52:33 crc kubenswrapper[4681]: I0404 02:52:33.763517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"tempest-tests-tempest\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " pod="openstack/tempest-tests-tempest" Apr 04 02:52:34 crc kubenswrapper[4681]: I0404 02:52:34.035351 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Apr 04 02:52:34 crc kubenswrapper[4681]: I0404 02:52:34.466545 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Apr 04 02:52:35 crc kubenswrapper[4681]: I0404 02:52:35.081993 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d245209-8139-42b0-aae0-5cafddfc00dd","Type":"ContainerStarted","Data":"ac4f245b101187bf37c6a24ea79fa37ab14375bdb90e08c394db084aca1e328f"} Apr 04 02:52:46 crc kubenswrapper[4681]: I0404 02:52:46.200780 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:52:46 crc kubenswrapper[4681]: E0404 02:52:46.201778 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:52:48 crc kubenswrapper[4681]: I0404 02:52:48.223221 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" 
event={"ID":"9d245209-8139-42b0-aae0-5cafddfc00dd","Type":"ContainerStarted","Data":"094f9ad09696ebe66b7b1ffd4d7e985a677f284c3c10e16a5e8e4f68a7a81ba3"} Apr 04 02:52:48 crc kubenswrapper[4681]: I0404 02:52:48.249801 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.176508176 podStartE2EDuration="16.249781864s" podCreationTimestamp="2026-04-04 02:52:32 +0000 UTC" firstStartedPulling="2026-04-04 02:52:34.471212271 +0000 UTC m=+3434.136987391" lastFinishedPulling="2026-04-04 02:52:46.544485959 +0000 UTC m=+3446.210261079" observedRunningTime="2026-04-04 02:52:48.242817055 +0000 UTC m=+3447.908592195" watchObservedRunningTime="2026-04-04 02:52:48.249781864 +0000 UTC m=+3447.915556984" Apr 04 02:52:58 crc kubenswrapper[4681]: I0404 02:52:58.200514 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:52:58 crc kubenswrapper[4681]: E0404 02:52:58.201346 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:53:12 crc kubenswrapper[4681]: I0404 02:53:12.201771 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:53:12 crc kubenswrapper[4681]: E0404 02:53:12.202696 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:53:23 crc kubenswrapper[4681]: I0404 02:53:23.200735 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:53:23 crc kubenswrapper[4681]: E0404 02:53:23.201547 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:53:37 crc kubenswrapper[4681]: I0404 02:53:37.201507 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:53:37 crc kubenswrapper[4681]: I0404 02:53:37.830457 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656"} Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.494758 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.499148 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.511768 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.687187 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f4t2\" (UniqueName: \"kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.687333 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.688048 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.789420 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.789492 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.789628 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f4t2\" (UniqueName: \"kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.790079 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.790403 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.823854 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f4t2\" (UniqueName: \"kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2\") pod \"certified-operators-kwhfh\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:45 crc kubenswrapper[4681]: I0404 02:53:45.834770 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:46 crc kubenswrapper[4681]: I0404 02:53:46.446146 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:46 crc kubenswrapper[4681]: I0404 02:53:46.916109 4681 generic.go:334] "Generic (PLEG): container finished" podID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerID="eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85" exitCode=0 Apr 04 02:53:46 crc kubenswrapper[4681]: I0404 02:53:46.916236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerDied","Data":"eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85"} Apr 04 02:53:46 crc kubenswrapper[4681]: I0404 02:53:46.916767 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerStarted","Data":"b72ff9c44657f9fa73ec351afcace5c59055907dfb174dd4ed72b36bb7e13d88"} Apr 04 02:53:46 crc kubenswrapper[4681]: I0404 02:53:46.918853 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:53:47 crc kubenswrapper[4681]: I0404 02:53:47.937733 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerStarted","Data":"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc"} Apr 04 02:53:50 crc kubenswrapper[4681]: I0404 02:53:50.968944 4681 generic.go:334] "Generic (PLEG): container finished" podID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerID="a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc" exitCode=0 Apr 04 02:53:50 crc kubenswrapper[4681]: I0404 02:53:50.968996 4681 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerDied","Data":"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc"} Apr 04 02:53:51 crc kubenswrapper[4681]: E0404 02:53:51.058714 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379b1597_6fc5_4304_8cc2_5cf4358194c3.slice/crio-a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379b1597_6fc5_4304_8cc2_5cf4358194c3.slice/crio-conmon-a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc.scope\": RecentStats: unable to find data in memory cache]" Apr 04 02:53:51 crc kubenswrapper[4681]: I0404 02:53:51.980226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerStarted","Data":"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9"} Apr 04 02:53:52 crc kubenswrapper[4681]: I0404 02:53:52.003397 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kwhfh" podStartSLOduration=2.576290117 podStartE2EDuration="7.00337902s" podCreationTimestamp="2026-04-04 02:53:45 +0000 UTC" firstStartedPulling="2026-04-04 02:53:46.918645325 +0000 UTC m=+3506.584420445" lastFinishedPulling="2026-04-04 02:53:51.345734228 +0000 UTC m=+3511.011509348" observedRunningTime="2026-04-04 02:53:52.000776929 +0000 UTC m=+3511.666552059" watchObservedRunningTime="2026-04-04 02:53:52.00337902 +0000 UTC m=+3511.669154140" Apr 04 02:53:55 crc kubenswrapper[4681]: I0404 02:53:55.836552 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:55 crc kubenswrapper[4681]: I0404 02:53:55.837299 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:55 crc kubenswrapper[4681]: I0404 02:53:55.947584 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:56 crc kubenswrapper[4681]: I0404 02:53:56.068370 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:56 crc kubenswrapper[4681]: I0404 02:53:56.200606 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.039596 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-kwhfh" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="registry-server" containerID="cri-o://df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9" gracePeriod=2 Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.629560 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.787631 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities\") pod \"379b1597-6fc5-4304-8cc2-5cf4358194c3\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.787893 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content\") pod \"379b1597-6fc5-4304-8cc2-5cf4358194c3\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.788042 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f4t2\" (UniqueName: \"kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2\") pod \"379b1597-6fc5-4304-8cc2-5cf4358194c3\" (UID: \"379b1597-6fc5-4304-8cc2-5cf4358194c3\") " Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.789429 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities" (OuterVolumeSpecName: "utilities") pod "379b1597-6fc5-4304-8cc2-5cf4358194c3" (UID: "379b1597-6fc5-4304-8cc2-5cf4358194c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.804212 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2" (OuterVolumeSpecName: "kube-api-access-2f4t2") pod "379b1597-6fc5-4304-8cc2-5cf4358194c3" (UID: "379b1597-6fc5-4304-8cc2-5cf4358194c3"). InnerVolumeSpecName "kube-api-access-2f4t2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.849406 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "379b1597-6fc5-4304-8cc2-5cf4358194c3" (UID: "379b1597-6fc5-4304-8cc2-5cf4358194c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.890772 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.890808 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2f4t2\" (UniqueName: \"kubernetes.io/projected/379b1597-6fc5-4304-8cc2-5cf4358194c3-kube-api-access-2f4t2\") on node \"crc\" DevicePath \"\"" Apr 04 02:53:58 crc kubenswrapper[4681]: I0404 02:53:58.890825 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/379b1597-6fc5-4304-8cc2-5cf4358194c3-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.051751 4681 generic.go:334] "Generic (PLEG): container finished" podID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerID="df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9" exitCode=0 Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.051807 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerDied","Data":"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9"} Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.051862 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-kwhfh" event={"ID":"379b1597-6fc5-4304-8cc2-5cf4358194c3","Type":"ContainerDied","Data":"b72ff9c44657f9fa73ec351afcace5c59055907dfb174dd4ed72b36bb7e13d88"} Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.051885 4681 scope.go:117] "RemoveContainer" containerID="df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.051895 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kwhfh" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.077787 4681 scope.go:117] "RemoveContainer" containerID="a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.123107 4681 scope.go:117] "RemoveContainer" containerID="eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.126393 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.135149 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-kwhfh"] Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.170247 4681 scope.go:117] "RemoveContainer" containerID="df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9" Apr 04 02:53:59 crc kubenswrapper[4681]: E0404 02:53:59.170820 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9\": container with ID starting with df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9 not found: ID does not exist" containerID="df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 
02:53:59.170879 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9"} err="failed to get container status \"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9\": rpc error: code = NotFound desc = could not find container \"df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9\": container with ID starting with df0270318fb9017c15061fb72dfd290cf0b227e2289d96e98a17d6b814cf32f9 not found: ID does not exist" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.170913 4681 scope.go:117] "RemoveContainer" containerID="a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc" Apr 04 02:53:59 crc kubenswrapper[4681]: E0404 02:53:59.171286 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc\": container with ID starting with a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc not found: ID does not exist" containerID="a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.171322 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc"} err="failed to get container status \"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc\": rpc error: code = NotFound desc = could not find container \"a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc\": container with ID starting with a39c547369253334db8343672305ac6e47faa47ae29b8b88bcf822f315eb40fc not found: ID does not exist" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.171345 4681 scope.go:117] "RemoveContainer" containerID="eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85" Apr 04 02:53:59 crc 
kubenswrapper[4681]: E0404 02:53:59.171828 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85\": container with ID starting with eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85 not found: ID does not exist" containerID="eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.171903 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85"} err="failed to get container status \"eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85\": rpc error: code = NotFound desc = could not find container \"eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85\": container with ID starting with eb65906818390671e8ef91527123529691fad066fd05949dc04fd29ab27f5a85 not found: ID does not exist" Apr 04 02:53:59 crc kubenswrapper[4681]: I0404 02:53:59.214710 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" path="/var/lib/kubelet/pods/379b1597-6fc5-4304-8cc2-5cf4358194c3/volumes" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.153927 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587854-xn2hn"] Apr 04 02:54:00 crc kubenswrapper[4681]: E0404 02:54:00.154743 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="extract-utilities" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.154759 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="extract-utilities" Apr 04 02:54:00 crc kubenswrapper[4681]: E0404 02:54:00.154777 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="registry-server" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.154783 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="registry-server" Apr 04 02:54:00 crc kubenswrapper[4681]: E0404 02:54:00.154804 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="extract-content" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.154811 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="extract-content" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.170461 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="379b1597-6fc5-4304-8cc2-5cf4358194c3" containerName="registry-server" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.171928 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.181549 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.181852 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.182069 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.218074 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587854-xn2hn"] Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.220507 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j76g\" (UniqueName: 
\"kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g\") pod \"auto-csr-approver-29587854-xn2hn\" (UID: \"4fbe7ac4-b502-4c72-ad0a-53020c479d69\") " pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.322996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j76g\" (UniqueName: \"kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g\") pod \"auto-csr-approver-29587854-xn2hn\" (UID: \"4fbe7ac4-b502-4c72-ad0a-53020c479d69\") " pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.344202 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j76g\" (UniqueName: \"kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g\") pod \"auto-csr-approver-29587854-xn2hn\" (UID: \"4fbe7ac4-b502-4c72-ad0a-53020c479d69\") " pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.518490 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:00 crc kubenswrapper[4681]: I0404 02:54:00.981569 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587854-xn2hn"] Apr 04 02:54:01 crc kubenswrapper[4681]: I0404 02:54:01.070791 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" event={"ID":"4fbe7ac4-b502-4c72-ad0a-53020c479d69","Type":"ContainerStarted","Data":"5690e51da62efb26b64bf4dce32ed4faf55696e3e6e5b9a2b65782227c37d55c"} Apr 04 02:54:03 crc kubenswrapper[4681]: I0404 02:54:03.091358 4681 generic.go:334] "Generic (PLEG): container finished" podID="4fbe7ac4-b502-4c72-ad0a-53020c479d69" containerID="4327b21da8a66a5c153bc9c1d97e2850bee18317ee4158f810fa57b8fc93fe05" exitCode=0 Apr 04 02:54:03 crc kubenswrapper[4681]: I0404 02:54:03.091402 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" event={"ID":"4fbe7ac4-b502-4c72-ad0a-53020c479d69","Type":"ContainerDied","Data":"4327b21da8a66a5c153bc9c1d97e2850bee18317ee4158f810fa57b8fc93fe05"} Apr 04 02:54:04 crc kubenswrapper[4681]: I0404 02:54:04.456795 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:04 crc kubenswrapper[4681]: I0404 02:54:04.611685 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j76g\" (UniqueName: \"kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g\") pod \"4fbe7ac4-b502-4c72-ad0a-53020c479d69\" (UID: \"4fbe7ac4-b502-4c72-ad0a-53020c479d69\") " Apr 04 02:54:04 crc kubenswrapper[4681]: I0404 02:54:04.623668 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g" (OuterVolumeSpecName: "kube-api-access-8j76g") pod "4fbe7ac4-b502-4c72-ad0a-53020c479d69" (UID: "4fbe7ac4-b502-4c72-ad0a-53020c479d69"). InnerVolumeSpecName "kube-api-access-8j76g". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:54:04 crc kubenswrapper[4681]: I0404 02:54:04.715070 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j76g\" (UniqueName: \"kubernetes.io/projected/4fbe7ac4-b502-4c72-ad0a-53020c479d69-kube-api-access-8j76g\") on node \"crc\" DevicePath \"\"" Apr 04 02:54:05 crc kubenswrapper[4681]: I0404 02:54:05.110704 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" event={"ID":"4fbe7ac4-b502-4c72-ad0a-53020c479d69","Type":"ContainerDied","Data":"5690e51da62efb26b64bf4dce32ed4faf55696e3e6e5b9a2b65782227c37d55c"} Apr 04 02:54:05 crc kubenswrapper[4681]: I0404 02:54:05.111018 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5690e51da62efb26b64bf4dce32ed4faf55696e3e6e5b9a2b65782227c37d55c" Apr 04 02:54:05 crc kubenswrapper[4681]: I0404 02:54:05.110774 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587854-xn2hn" Apr 04 02:54:05 crc kubenswrapper[4681]: I0404 02:54:05.537553 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587848-ns5cp"] Apr 04 02:54:05 crc kubenswrapper[4681]: I0404 02:54:05.549746 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587848-ns5cp"] Apr 04 02:54:07 crc kubenswrapper[4681]: I0404 02:54:07.213732 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc439b89-de86-4c8b-a244-924e7b571fb1" path="/var/lib/kubelet/pods/fc439b89-de86-4c8b-a244-924e7b571fb1/volumes" Apr 04 02:54:31 crc kubenswrapper[4681]: I0404 02:54:31.144534 4681 scope.go:117] "RemoveContainer" containerID="b850dcd9333add2cbc5c47e52f33122674d144df0f92c32d7ff61095b55cf0d5" Apr 04 02:55:56 crc kubenswrapper[4681]: I0404 02:55:56.524119 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:55:56 crc kubenswrapper[4681]: I0404 02:55:56.525206 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.178611 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587856-dpr6f"] Apr 04 02:56:00 crc kubenswrapper[4681]: E0404 02:56:00.179684 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe7ac4-b502-4c72-ad0a-53020c479d69" containerName="oc" Apr 04 02:56:00 crc 
kubenswrapper[4681]: I0404 02:56:00.179702 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe7ac4-b502-4c72-ad0a-53020c479d69" containerName="oc" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.179990 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbe7ac4-b502-4c72-ad0a-53020c479d69" containerName="oc" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.183106 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.185631 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.186395 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.189865 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.200504 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587856-dpr6f"] Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.357436 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdl2d\" (UniqueName: \"kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d\") pod \"auto-csr-approver-29587856-dpr6f\" (UID: \"ed7b3a2f-5dc4-4107-9027-259bbcbf4895\") " pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.459635 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdl2d\" (UniqueName: \"kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d\") pod \"auto-csr-approver-29587856-dpr6f\" 
(UID: \"ed7b3a2f-5dc4-4107-9027-259bbcbf4895\") " pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.499156 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdl2d\" (UniqueName: \"kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d\") pod \"auto-csr-approver-29587856-dpr6f\" (UID: \"ed7b3a2f-5dc4-4107-9027-259bbcbf4895\") " pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.505050 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:00 crc kubenswrapper[4681]: I0404 02:56:00.985781 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587856-dpr6f"] Apr 04 02:56:01 crc kubenswrapper[4681]: I0404 02:56:01.660272 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" event={"ID":"ed7b3a2f-5dc4-4107-9027-259bbcbf4895","Type":"ContainerStarted","Data":"dfe30608d84079cc6b114f6caa27047450ef700dbd34f056c73fcfd7b37d4cb7"} Apr 04 02:56:02 crc kubenswrapper[4681]: I0404 02:56:02.671799 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed7b3a2f-5dc4-4107-9027-259bbcbf4895" containerID="c34d02d833f224a53a7194f5626141d5924607edbd1a0ffc5614f1594f3814ac" exitCode=0 Apr 04 02:56:02 crc kubenswrapper[4681]: I0404 02:56:02.671860 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" event={"ID":"ed7b3a2f-5dc4-4107-9027-259bbcbf4895","Type":"ContainerDied","Data":"c34d02d833f224a53a7194f5626141d5924607edbd1a0ffc5614f1594f3814ac"} Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.031110 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.137289 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdl2d\" (UniqueName: \"kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d\") pod \"ed7b3a2f-5dc4-4107-9027-259bbcbf4895\" (UID: \"ed7b3a2f-5dc4-4107-9027-259bbcbf4895\") " Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.147996 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d" (OuterVolumeSpecName: "kube-api-access-gdl2d") pod "ed7b3a2f-5dc4-4107-9027-259bbcbf4895" (UID: "ed7b3a2f-5dc4-4107-9027-259bbcbf4895"). InnerVolumeSpecName "kube-api-access-gdl2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.239394 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdl2d\" (UniqueName: \"kubernetes.io/projected/ed7b3a2f-5dc4-4107-9027-259bbcbf4895-kube-api-access-gdl2d\") on node \"crc\" DevicePath \"\"" Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.692008 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.692480 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587856-dpr6f" event={"ID":"ed7b3a2f-5dc4-4107-9027-259bbcbf4895","Type":"ContainerDied","Data":"dfe30608d84079cc6b114f6caa27047450ef700dbd34f056c73fcfd7b37d4cb7"} Apr 04 02:56:04 crc kubenswrapper[4681]: I0404 02:56:04.692537 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfe30608d84079cc6b114f6caa27047450ef700dbd34f056c73fcfd7b37d4cb7" Apr 04 02:56:05 crc kubenswrapper[4681]: I0404 02:56:05.104818 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587850-9pj6m"] Apr 04 02:56:05 crc kubenswrapper[4681]: I0404 02:56:05.116507 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587850-9pj6m"] Apr 04 02:56:05 crc kubenswrapper[4681]: I0404 02:56:05.215741 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95508813-b747-4dbb-8b5b-f845e7044829" path="/var/lib/kubelet/pods/95508813-b747-4dbb-8b5b-f845e7044829/volumes" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.411203 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:15 crc kubenswrapper[4681]: E0404 02:56:15.413967 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7b3a2f-5dc4-4107-9027-259bbcbf4895" containerName="oc" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.414145 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7b3a2f-5dc4-4107-9027-259bbcbf4895" containerName="oc" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.414659 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7b3a2f-5dc4-4107-9027-259bbcbf4895" containerName="oc" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 
02:56:15.417396 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.444955 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.491747 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.491830 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.491987 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgcj\" (UniqueName: \"kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.593600 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.593683 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.593856 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zgcj\" (UniqueName: \"kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.594177 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.594316 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.618163 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zgcj\" (UniqueName: \"kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj\") pod \"redhat-operators-xgq5l\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:15 crc kubenswrapper[4681]: I0404 02:56:15.747167 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:16 crc kubenswrapper[4681]: I0404 02:56:16.260077 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:16 crc kubenswrapper[4681]: I0404 02:56:16.825462 4681 generic.go:334] "Generic (PLEG): container finished" podID="5be37def-f6bb-4f48-9163-bf93050d3035" containerID="c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4" exitCode=0 Apr 04 02:56:16 crc kubenswrapper[4681]: I0404 02:56:16.825561 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerDied","Data":"c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4"} Apr 04 02:56:16 crc kubenswrapper[4681]: I0404 02:56:16.825793 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerStarted","Data":"6ab908293c8aa4752c85a3ea2ab0d1210bc9ce5da3624aa9f1cae1732a037ccd"} Apr 04 02:56:17 crc kubenswrapper[4681]: I0404 02:56:17.836128 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerStarted","Data":"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d"} Apr 04 02:56:24 crc kubenswrapper[4681]: I0404 02:56:24.910807 4681 generic.go:334] "Generic (PLEG): container finished" podID="5be37def-f6bb-4f48-9163-bf93050d3035" containerID="3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d" exitCode=0 Apr 04 02:56:24 crc kubenswrapper[4681]: I0404 02:56:24.910863 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" 
event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerDied","Data":"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d"} Apr 04 02:56:25 crc kubenswrapper[4681]: I0404 02:56:25.923196 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerStarted","Data":"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5"} Apr 04 02:56:25 crc kubenswrapper[4681]: I0404 02:56:25.960682 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xgq5l" podStartSLOduration=2.540676924 podStartE2EDuration="10.960665103s" podCreationTimestamp="2026-04-04 02:56:15 +0000 UTC" firstStartedPulling="2026-04-04 02:56:16.827117604 +0000 UTC m=+3656.492892724" lastFinishedPulling="2026-04-04 02:56:25.247105783 +0000 UTC m=+3664.912880903" observedRunningTime="2026-04-04 02:56:25.945859227 +0000 UTC m=+3665.611634397" watchObservedRunningTime="2026-04-04 02:56:25.960665103 +0000 UTC m=+3665.626440223" Apr 04 02:56:26 crc kubenswrapper[4681]: I0404 02:56:26.525213 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:56:26 crc kubenswrapper[4681]: I0404 02:56:26.525340 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:56:31 crc kubenswrapper[4681]: I0404 02:56:31.264025 4681 scope.go:117] "RemoveContainer" 
containerID="d386b321f444658bac254380f2c6812782087281ca92c0871e6c0aa7fe1ab957" Apr 04 02:56:35 crc kubenswrapper[4681]: I0404 02:56:35.747820 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:35 crc kubenswrapper[4681]: I0404 02:56:35.748639 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:36 crc kubenswrapper[4681]: I0404 02:56:36.796047 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xgq5l" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="registry-server" probeResult="failure" output=< Apr 04 02:56:36 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 02:56:36 crc kubenswrapper[4681]: > Apr 04 02:56:45 crc kubenswrapper[4681]: I0404 02:56:45.827681 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:45 crc kubenswrapper[4681]: I0404 02:56:45.876459 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:46 crc kubenswrapper[4681]: I0404 02:56:46.594016 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.168086 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xgq5l" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="registry-server" containerID="cri-o://3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5" gracePeriod=2 Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.661949 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.725086 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content\") pod \"5be37def-f6bb-4f48-9163-bf93050d3035\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.725169 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zgcj\" (UniqueName: \"kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj\") pod \"5be37def-f6bb-4f48-9163-bf93050d3035\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.725200 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities\") pod \"5be37def-f6bb-4f48-9163-bf93050d3035\" (UID: \"5be37def-f6bb-4f48-9163-bf93050d3035\") " Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.726744 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities" (OuterVolumeSpecName: "utilities") pod "5be37def-f6bb-4f48-9163-bf93050d3035" (UID: "5be37def-f6bb-4f48-9163-bf93050d3035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.733552 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj" (OuterVolumeSpecName: "kube-api-access-2zgcj") pod "5be37def-f6bb-4f48-9163-bf93050d3035" (UID: "5be37def-f6bb-4f48-9163-bf93050d3035"). InnerVolumeSpecName "kube-api-access-2zgcj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.827732 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zgcj\" (UniqueName: \"kubernetes.io/projected/5be37def-f6bb-4f48-9163-bf93050d3035-kube-api-access-2zgcj\") on node \"crc\" DevicePath \"\"" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.827775 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.865047 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5be37def-f6bb-4f48-9163-bf93050d3035" (UID: "5be37def-f6bb-4f48-9163-bf93050d3035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:56:47 crc kubenswrapper[4681]: I0404 02:56:47.929776 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be37def-f6bb-4f48-9163-bf93050d3035-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.179084 4681 generic.go:334] "Generic (PLEG): container finished" podID="5be37def-f6bb-4f48-9163-bf93050d3035" containerID="3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5" exitCode=0 Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.179130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerDied","Data":"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5"} Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.179195 4681 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-xgq5l" event={"ID":"5be37def-f6bb-4f48-9163-bf93050d3035","Type":"ContainerDied","Data":"6ab908293c8aa4752c85a3ea2ab0d1210bc9ce5da3624aa9f1cae1732a037ccd"} Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.179204 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xgq5l" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.179224 4681 scope.go:117] "RemoveContainer" containerID="3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.208240 4681 scope.go:117] "RemoveContainer" containerID="3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.219097 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.229213 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xgq5l"] Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.246304 4681 scope.go:117] "RemoveContainer" containerID="c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283037 4681 scope.go:117] "RemoveContainer" containerID="3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5" Apr 04 02:56:48 crc kubenswrapper[4681]: E0404 02:56:48.283389 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5\": container with ID starting with 3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5 not found: ID does not exist" containerID="3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283415 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5"} err="failed to get container status \"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5\": rpc error: code = NotFound desc = could not find container \"3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5\": container with ID starting with 3ef1f407dd6f3d973c8f01d30037a329d6edc08e5c271a66ceb942d2d96491f5 not found: ID does not exist" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283433 4681 scope.go:117] "RemoveContainer" containerID="3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d" Apr 04 02:56:48 crc kubenswrapper[4681]: E0404 02:56:48.283595 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d\": container with ID starting with 3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d not found: ID does not exist" containerID="3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283612 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d"} err="failed to get container status \"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d\": rpc error: code = NotFound desc = could not find container \"3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d\": container with ID starting with 3f6f2965e0c3422c766ee5d4a6ce3f94d61e74ff2fa365bf919c8b045070543d not found: ID does not exist" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283623 4681 scope.go:117] "RemoveContainer" containerID="c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4" Apr 04 02:56:48 crc kubenswrapper[4681]: E0404 
02:56:48.283985 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4\": container with ID starting with c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4 not found: ID does not exist" containerID="c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4" Apr 04 02:56:48 crc kubenswrapper[4681]: I0404 02:56:48.283999 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4"} err="failed to get container status \"c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4\": rpc error: code = NotFound desc = could not find container \"c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4\": container with ID starting with c7cda7e261ed165e6eac15b2ef164af73644a266bbf4ac43d02f87d416b243a4 not found: ID does not exist" Apr 04 02:56:49 crc kubenswrapper[4681]: I0404 02:56:49.212367 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" path="/var/lib/kubelet/pods/5be37def-f6bb-4f48-9163-bf93050d3035/volumes" Apr 04 02:56:56 crc kubenswrapper[4681]: I0404 02:56:56.524039 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:56:56 crc kubenswrapper[4681]: I0404 02:56:56.524549 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Apr 04 02:56:56 crc kubenswrapper[4681]: I0404 02:56:56.524593 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:56:56 crc kubenswrapper[4681]: I0404 02:56:56.525739 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:56:56 crc kubenswrapper[4681]: I0404 02:56:56.525795 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656" gracePeriod=600 Apr 04 02:56:57 crc kubenswrapper[4681]: I0404 02:56:57.263519 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656" exitCode=0 Apr 04 02:56:57 crc kubenswrapper[4681]: I0404 02:56:57.263550 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656"} Apr 04 02:56:57 crc kubenswrapper[4681]: I0404 02:56:57.264103 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6"} Apr 04 02:56:57 crc 
kubenswrapper[4681]: I0404 02:56:57.264126 4681 scope.go:117] "RemoveContainer" containerID="690be4a2fcc0283901336e24296f8e03cadd8695ffe4317592f89e27ef111d01" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.163077 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587858-54ggf"] Apr 04 02:58:00 crc kubenswrapper[4681]: E0404 02:58:00.163988 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="extract-content" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.164001 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="extract-content" Apr 04 02:58:00 crc kubenswrapper[4681]: E0404 02:58:00.164030 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="registry-server" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.164036 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="registry-server" Apr 04 02:58:00 crc kubenswrapper[4681]: E0404 02:58:00.164051 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="extract-utilities" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.164057 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="extract-utilities" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.164282 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be37def-f6bb-4f48-9163-bf93050d3035" containerName="registry-server" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.165063 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.168386 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.168386 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.168918 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.179443 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587858-54ggf"] Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.289941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-459q8\" (UniqueName: \"kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8\") pod \"auto-csr-approver-29587858-54ggf\" (UID: \"d07e3739-bc4b-48bd-981f-f37f9a52e4e1\") " pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.393313 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-459q8\" (UniqueName: \"kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8\") pod \"auto-csr-approver-29587858-54ggf\" (UID: \"d07e3739-bc4b-48bd-981f-f37f9a52e4e1\") " pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.411336 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-459q8\" (UniqueName: \"kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8\") pod \"auto-csr-approver-29587858-54ggf\" (UID: \"d07e3739-bc4b-48bd-981f-f37f9a52e4e1\") " 
pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.484603 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:00 crc kubenswrapper[4681]: I0404 02:58:00.981063 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587858-54ggf"] Apr 04 02:58:01 crc kubenswrapper[4681]: I0404 02:58:01.908573 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587858-54ggf" event={"ID":"d07e3739-bc4b-48bd-981f-f37f9a52e4e1","Type":"ContainerStarted","Data":"c98ab913b031710868e2825cfe8a582553468b9f29dd60d44a26573ddd7ba3b8"} Apr 04 02:58:02 crc kubenswrapper[4681]: I0404 02:58:02.919236 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587858-54ggf" event={"ID":"d07e3739-bc4b-48bd-981f-f37f9a52e4e1","Type":"ContainerStarted","Data":"935aad3be6a753236a487dfa67607c9ab7ff2e608bb599f7c6c02186a9421af6"} Apr 04 02:58:02 crc kubenswrapper[4681]: I0404 02:58:02.942944 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587858-54ggf" podStartSLOduration=2.046899721 podStartE2EDuration="2.942921982s" podCreationTimestamp="2026-04-04 02:58:00 +0000 UTC" firstStartedPulling="2026-04-04 02:58:00.982771926 +0000 UTC m=+3760.648547046" lastFinishedPulling="2026-04-04 02:58:01.878794177 +0000 UTC m=+3761.544569307" observedRunningTime="2026-04-04 02:58:02.936178748 +0000 UTC m=+3762.601953868" watchObservedRunningTime="2026-04-04 02:58:02.942921982 +0000 UTC m=+3762.608697102" Apr 04 02:58:03 crc kubenswrapper[4681]: I0404 02:58:03.928238 4681 generic.go:334] "Generic (PLEG): container finished" podID="d07e3739-bc4b-48bd-981f-f37f9a52e4e1" containerID="935aad3be6a753236a487dfa67607c9ab7ff2e608bb599f7c6c02186a9421af6" exitCode=0 Apr 04 02:58:03 crc 
kubenswrapper[4681]: I0404 02:58:03.928619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587858-54ggf" event={"ID":"d07e3739-bc4b-48bd-981f-f37f9a52e4e1","Type":"ContainerDied","Data":"935aad3be6a753236a487dfa67607c9ab7ff2e608bb599f7c6c02186a9421af6"} Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.371216 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.412005 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-459q8\" (UniqueName: \"kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8\") pod \"d07e3739-bc4b-48bd-981f-f37f9a52e4e1\" (UID: \"d07e3739-bc4b-48bd-981f-f37f9a52e4e1\") " Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.421488 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8" (OuterVolumeSpecName: "kube-api-access-459q8") pod "d07e3739-bc4b-48bd-981f-f37f9a52e4e1" (UID: "d07e3739-bc4b-48bd-981f-f37f9a52e4e1"). InnerVolumeSpecName "kube-api-access-459q8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.516683 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-459q8\" (UniqueName: \"kubernetes.io/projected/d07e3739-bc4b-48bd-981f-f37f9a52e4e1-kube-api-access-459q8\") on node \"crc\" DevicePath \"\"" Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.951584 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587858-54ggf" event={"ID":"d07e3739-bc4b-48bd-981f-f37f9a52e4e1","Type":"ContainerDied","Data":"c98ab913b031710868e2825cfe8a582553468b9f29dd60d44a26573ddd7ba3b8"} Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.951641 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98ab913b031710868e2825cfe8a582553468b9f29dd60d44a26573ddd7ba3b8" Apr 04 02:58:05 crc kubenswrapper[4681]: I0404 02:58:05.951670 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587858-54ggf" Apr 04 02:58:06 crc kubenswrapper[4681]: I0404 02:58:06.025480 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587852-kv79j"] Apr 04 02:58:06 crc kubenswrapper[4681]: I0404 02:58:06.037629 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587852-kv79j"] Apr 04 02:58:07 crc kubenswrapper[4681]: I0404 02:58:07.213798 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21c07a25-0759-4f0f-8eee-025be8595ff9" path="/var/lib/kubelet/pods/21c07a25-0759-4f0f-8eee-025be8595ff9/volumes" Apr 04 02:58:31 crc kubenswrapper[4681]: I0404 02:58:31.395749 4681 scope.go:117] "RemoveContainer" containerID="425b576efaa4c630fe453c683b482f4f9ee9d6f1fcc6f7c028b88655bdb54413" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.377209 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:58:54 crc kubenswrapper[4681]: E0404 02:58:54.378327 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d07e3739-bc4b-48bd-981f-f37f9a52e4e1" containerName="oc" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.378348 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d07e3739-bc4b-48bd-981f-f37f9a52e4e1" containerName="oc" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.378613 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d07e3739-bc4b-48bd-981f-f37f9a52e4e1" containerName="oc" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.380434 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.405480 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.466706 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.467006 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqbg5\" (UniqueName: \"kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.467190 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.570067 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.570279 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqbg5\" (UniqueName: \"kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.570311 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.570929 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.571007 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.590401 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqbg5\" (UniqueName: \"kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5\") pod \"community-operators-499r6\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:54 crc kubenswrapper[4681]: I0404 02:58:54.703380 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:58:55 crc kubenswrapper[4681]: W0404 02:58:55.320503 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaaaa4f37_f00d_4c9c_a0de_618618ac7bbf.slice/crio-ca7165d400939fb2c6ef004b95fa2c4624216cafea41bebdba2f7f9800d5e6b2 WatchSource:0}: Error finding container ca7165d400939fb2c6ef004b95fa2c4624216cafea41bebdba2f7f9800d5e6b2: Status 404 returned error can't find the container with id ca7165d400939fb2c6ef004b95fa2c4624216cafea41bebdba2f7f9800d5e6b2 Apr 04 02:58:55 crc kubenswrapper[4681]: I0404 02:58:55.322249 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.022976 4681 generic.go:334] "Generic (PLEG): container finished" podID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerID="3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647" exitCode=0 Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.023046 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" 
event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerDied","Data":"3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647"} Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.023308 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerStarted","Data":"ca7165d400939fb2c6ef004b95fa2c4624216cafea41bebdba2f7f9800d5e6b2"} Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.025534 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.524004 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:58:56 crc kubenswrapper[4681]: I0404 02:58:56.524443 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:58:57 crc kubenswrapper[4681]: I0404 02:58:57.037679 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerStarted","Data":"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17"} Apr 04 02:58:59 crc kubenswrapper[4681]: I0404 02:58:59.061620 4681 generic.go:334] "Generic (PLEG): container finished" podID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerID="c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17" exitCode=0 Apr 04 02:58:59 crc 
kubenswrapper[4681]: I0404 02:58:59.061678 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerDied","Data":"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17"} Apr 04 02:59:00 crc kubenswrapper[4681]: I0404 02:59:00.081211 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerStarted","Data":"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3"} Apr 04 02:59:00 crc kubenswrapper[4681]: I0404 02:59:00.109006 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-499r6" podStartSLOduration=2.6810900159999997 podStartE2EDuration="6.108986989s" podCreationTimestamp="2026-04-04 02:58:54 +0000 UTC" firstStartedPulling="2026-04-04 02:58:56.025129337 +0000 UTC m=+3815.690904467" lastFinishedPulling="2026-04-04 02:58:59.45302632 +0000 UTC m=+3819.118801440" observedRunningTime="2026-04-04 02:59:00.098808062 +0000 UTC m=+3819.764583202" watchObservedRunningTime="2026-04-04 02:59:00.108986989 +0000 UTC m=+3819.774762109" Apr 04 02:59:04 crc kubenswrapper[4681]: I0404 02:59:04.704221 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:04 crc kubenswrapper[4681]: I0404 02:59:04.704772 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:04 crc kubenswrapper[4681]: I0404 02:59:04.767433 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:05 crc kubenswrapper[4681]: I0404 02:59:05.170597 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:05 crc kubenswrapper[4681]: I0404 02:59:05.216830 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.156829 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-499r6" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="registry-server" containerID="cri-o://7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3" gracePeriod=2 Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.673235 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.736395 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities\") pod \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.736450 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqbg5\" (UniqueName: \"kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5\") pod \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.736496 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content\") pod \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\" (UID: \"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf\") " Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.737962 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities" (OuterVolumeSpecName: "utilities") pod "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" (UID: "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.751936 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5" (OuterVolumeSpecName: "kube-api-access-gqbg5") pod "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" (UID: "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf"). InnerVolumeSpecName "kube-api-access-gqbg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.838813 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 02:59:07 crc kubenswrapper[4681]: I0404 02:59:07.838847 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqbg5\" (UniqueName: \"kubernetes.io/projected/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-kube-api-access-gqbg5\") on node \"crc\" DevicePath \"\"" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.170510 4681 generic.go:334] "Generic (PLEG): container finished" podID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerID="7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3" exitCode=0 Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.170590 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-499r6" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.170610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerDied","Data":"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3"} Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.171737 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-499r6" event={"ID":"aaaa4f37-f00d-4c9c-a0de-618618ac7bbf","Type":"ContainerDied","Data":"ca7165d400939fb2c6ef004b95fa2c4624216cafea41bebdba2f7f9800d5e6b2"} Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.171765 4681 scope.go:117] "RemoveContainer" containerID="7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.199736 4681 scope.go:117] "RemoveContainer" containerID="c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.233196 4681 scope.go:117] "RemoveContainer" containerID="3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.297395 4681 scope.go:117] "RemoveContainer" containerID="7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3" Apr 04 02:59:08 crc kubenswrapper[4681]: E0404 02:59:08.302865 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3\": container with ID starting with 7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3 not found: ID does not exist" containerID="7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.302937 4681 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3"} err="failed to get container status \"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3\": rpc error: code = NotFound desc = could not find container \"7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3\": container with ID starting with 7cb20fad23fe0fd6ab733c0558478cf2e8fe67492e7d02d4da4f6d5cf5a578e3 not found: ID does not exist" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.302976 4681 scope.go:117] "RemoveContainer" containerID="c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17" Apr 04 02:59:08 crc kubenswrapper[4681]: E0404 02:59:08.303592 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17\": container with ID starting with c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17 not found: ID does not exist" containerID="c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.303629 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17"} err="failed to get container status \"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17\": rpc error: code = NotFound desc = could not find container \"c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17\": container with ID starting with c23d4467ccb8c164b851076872ba7e9da8a1914480498dd35042c8a330336b17 not found: ID does not exist" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.303653 4681 scope.go:117] "RemoveContainer" containerID="3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.303713 4681 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" (UID: "aaaa4f37-f00d-4c9c-a0de-618618ac7bbf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 02:59:08 crc kubenswrapper[4681]: E0404 02:59:08.304225 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647\": container with ID starting with 3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647 not found: ID does not exist" containerID="3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.304324 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647"} err="failed to get container status \"3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647\": rpc error: code = NotFound desc = could not find container \"3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647\": container with ID starting with 3ae28a96b9449ef72256dbab2266102b4b720bf47e36c4d4c6e3451bc90be647 not found: ID does not exist" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.353619 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.512281 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:59:08 crc kubenswrapper[4681]: I0404 02:59:08.525668 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-499r6"] Apr 04 02:59:09 crc kubenswrapper[4681]: I0404 02:59:09.216835 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" path="/var/lib/kubelet/pods/aaaa4f37-f00d-4c9c-a0de-618618ac7bbf/volumes" Apr 04 02:59:26 crc kubenswrapper[4681]: I0404 02:59:26.524105 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:59:26 crc kubenswrapper[4681]: I0404 02:59:26.524796 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.523931 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.524581 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.524635 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.525545 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.525609 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" gracePeriod=600 Apr 04 02:59:56 crc kubenswrapper[4681]: E0404 02:59:56.646725 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.724312 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" exitCode=0 Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.724366 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6"} Apr 04 02:59:56 crc 
kubenswrapper[4681]: I0404 02:59:56.724401 4681 scope.go:117] "RemoveContainer" containerID="9a5a1bde3866fbb86f1c88c1ac7b7dfef788ee5c808f1c22105ee69b8ca81656" Apr 04 02:59:56 crc kubenswrapper[4681]: I0404 02:59:56.725088 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 02:59:56 crc kubenswrapper[4681]: E0404 02:59:56.725429 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.149990 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587860-z4dwn"] Apr 04 03:00:00 crc kubenswrapper[4681]: E0404 03:00:00.151136 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="extract-content" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.151159 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="extract-content" Apr 04 03:00:00 crc kubenswrapper[4681]: E0404 03:00:00.151171 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="extract-utilities" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.151179 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="extract-utilities" Apr 04 03:00:00 crc kubenswrapper[4681]: E0404 03:00:00.151194 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" 
containerName="registry-server" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.151202 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="registry-server" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.151505 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="aaaa4f37-f00d-4c9c-a0de-618618ac7bbf" containerName="registry-server" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.152416 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.158099 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.158646 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.159311 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.163774 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k"] Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.165986 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.168248 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.170211 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.185986 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587860-z4dwn"] Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.225215 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k"] Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.233862 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.233941 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp58b\" (UniqueName: \"kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.234097 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p62c\" (UniqueName: 
\"kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c\") pod \"auto-csr-approver-29587860-z4dwn\" (UID: \"3a2def9e-7fb8-4501-ab87-267bdf00c720\") " pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.234140 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.335731 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.335803 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.335864 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp58b\" (UniqueName: \"kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 
03:00:00.336119 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p62c\" (UniqueName: \"kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c\") pod \"auto-csr-approver-29587860-z4dwn\" (UID: \"3a2def9e-7fb8-4501-ab87-267bdf00c720\") " pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.337250 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.343911 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.352006 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp58b\" (UniqueName: \"kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b\") pod \"collect-profiles-29587860-stj6k\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.355170 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p62c\" (UniqueName: \"kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c\") pod \"auto-csr-approver-29587860-z4dwn\" (UID: \"3a2def9e-7fb8-4501-ab87-267bdf00c720\") " 
pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.473241 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.504692 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:00 crc kubenswrapper[4681]: I0404 03:00:00.990789 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587860-z4dwn"] Apr 04 03:00:01 crc kubenswrapper[4681]: W0404 03:00:01.081088 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7239abc9_4b26_4c13_90f7_db97bcd1a76c.slice/crio-21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545 WatchSource:0}: Error finding container 21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545: Status 404 returned error can't find the container with id 21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545 Apr 04 03:00:01 crc kubenswrapper[4681]: I0404 03:00:01.081553 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k"] Apr 04 03:00:01 crc kubenswrapper[4681]: I0404 03:00:01.779565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" event={"ID":"3a2def9e-7fb8-4501-ab87-267bdf00c720","Type":"ContainerStarted","Data":"5aad90d7c5af42d8735c0aa15a60400d6298ed7119a09f20b392fe8b3d9a70c8"} Apr 04 03:00:01 crc kubenswrapper[4681]: I0404 03:00:01.782630 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" 
event={"ID":"7239abc9-4b26-4c13-90f7-db97bcd1a76c","Type":"ContainerStarted","Data":"a0182d3095aeeabaa53f8384729cd9df8d13962bac1c1ded65eb8e2263e933e3"} Apr 04 03:00:01 crc kubenswrapper[4681]: I0404 03:00:01.782672 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" event={"ID":"7239abc9-4b26-4c13-90f7-db97bcd1a76c","Type":"ContainerStarted","Data":"21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545"} Apr 04 03:00:02 crc kubenswrapper[4681]: I0404 03:00:02.793886 4681 generic.go:334] "Generic (PLEG): container finished" podID="3a2def9e-7fb8-4501-ab87-267bdf00c720" containerID="23e9565f98689d4f36fe2f5be1b32e192a00642bf418e55fbea479119d78ae3f" exitCode=0 Apr 04 03:00:02 crc kubenswrapper[4681]: I0404 03:00:02.793991 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" event={"ID":"3a2def9e-7fb8-4501-ab87-267bdf00c720","Type":"ContainerDied","Data":"23e9565f98689d4f36fe2f5be1b32e192a00642bf418e55fbea479119d78ae3f"} Apr 04 03:00:02 crc kubenswrapper[4681]: I0404 03:00:02.796883 4681 generic.go:334] "Generic (PLEG): container finished" podID="7239abc9-4b26-4c13-90f7-db97bcd1a76c" containerID="a0182d3095aeeabaa53f8384729cd9df8d13962bac1c1ded65eb8e2263e933e3" exitCode=0 Apr 04 03:00:02 crc kubenswrapper[4681]: I0404 03:00:02.796932 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" event={"ID":"7239abc9-4b26-4c13-90f7-db97bcd1a76c","Type":"ContainerDied","Data":"a0182d3095aeeabaa53f8384729cd9df8d13962bac1c1ded65eb8e2263e933e3"} Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.196652 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.301027 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume\") pod \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.301427 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp58b\" (UniqueName: \"kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b\") pod \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.301707 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume\") pod \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\" (UID: \"7239abc9-4b26-4c13-90f7-db97bcd1a76c\") " Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.302439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume" (OuterVolumeSpecName: "config-volume") pod "7239abc9-4b26-4c13-90f7-db97bcd1a76c" (UID: "7239abc9-4b26-4c13-90f7-db97bcd1a76c"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.303019 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7239abc9-4b26-4c13-90f7-db97bcd1a76c-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.307435 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b" (OuterVolumeSpecName: "kube-api-access-zp58b") pod "7239abc9-4b26-4c13-90f7-db97bcd1a76c" (UID: "7239abc9-4b26-4c13-90f7-db97bcd1a76c"). InnerVolumeSpecName "kube-api-access-zp58b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.307836 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7239abc9-4b26-4c13-90f7-db97bcd1a76c" (UID: "7239abc9-4b26-4c13-90f7-db97bcd1a76c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.404631 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp58b\" (UniqueName: \"kubernetes.io/projected/7239abc9-4b26-4c13-90f7-db97bcd1a76c-kube-api-access-zp58b\") on node \"crc\" DevicePath \"\"" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.404666 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7239abc9-4b26-4c13-90f7-db97bcd1a76c-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.810024 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" event={"ID":"7239abc9-4b26-4c13-90f7-db97bcd1a76c","Type":"ContainerDied","Data":"21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545"} Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.810074 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21bf005c742d97eab7f4fd756937ca1c2fa2c6f90bad0aaa4ee40d3c0224c545" Apr 04 03:00:03 crc kubenswrapper[4681]: I0404 03:00:03.810068 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k" Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.184936 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.319378 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574"] Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.322371 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p62c\" (UniqueName: \"kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c\") pod \"3a2def9e-7fb8-4501-ab87-267bdf00c720\" (UID: \"3a2def9e-7fb8-4501-ab87-267bdf00c720\") " Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.329022 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c" (OuterVolumeSpecName: "kube-api-access-2p62c") pod "3a2def9e-7fb8-4501-ab87-267bdf00c720" (UID: "3a2def9e-7fb8-4501-ab87-267bdf00c720"). InnerVolumeSpecName "kube-api-access-2p62c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.330320 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587815-vr574"] Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.424461 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p62c\" (UniqueName: \"kubernetes.io/projected/3a2def9e-7fb8-4501-ab87-267bdf00c720-kube-api-access-2p62c\") on node \"crc\" DevicePath \"\"" Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.819594 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" event={"ID":"3a2def9e-7fb8-4501-ab87-267bdf00c720","Type":"ContainerDied","Data":"5aad90d7c5af42d8735c0aa15a60400d6298ed7119a09f20b392fe8b3d9a70c8"} Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.819641 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aad90d7c5af42d8735c0aa15a60400d6298ed7119a09f20b392fe8b3d9a70c8" Apr 04 03:00:04 crc kubenswrapper[4681]: I0404 03:00:04.819657 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587860-z4dwn" Apr 04 03:00:05 crc kubenswrapper[4681]: I0404 03:00:05.216676 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a9a69a-f531-41d0-910d-800cab47e903" path="/var/lib/kubelet/pods/c3a9a69a-f531-41d0-910d-800cab47e903/volumes" Apr 04 03:00:05 crc kubenswrapper[4681]: I0404 03:00:05.245960 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587854-xn2hn"] Apr 04 03:00:05 crc kubenswrapper[4681]: I0404 03:00:05.255365 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587854-xn2hn"] Apr 04 03:00:07 crc kubenswrapper[4681]: I0404 03:00:07.219545 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbe7ac4-b502-4c72-ad0a-53020c479d69" path="/var/lib/kubelet/pods/4fbe7ac4-b502-4c72-ad0a-53020c479d69/volumes" Apr 04 03:00:11 crc kubenswrapper[4681]: I0404 03:00:11.232146 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:00:11 crc kubenswrapper[4681]: E0404 03:00:11.234965 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:00:24 crc kubenswrapper[4681]: I0404 03:00:24.200918 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:00:24 crc kubenswrapper[4681]: E0404 03:00:24.201796 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:00:31 crc kubenswrapper[4681]: I0404 03:00:31.530418 4681 scope.go:117] "RemoveContainer" containerID="4327b21da8a66a5c153bc9c1d97e2850bee18317ee4158f810fa57b8fc93fe05" Apr 04 03:00:31 crc kubenswrapper[4681]: I0404 03:00:31.606743 4681 scope.go:117] "RemoveContainer" containerID="312f2e48a098299def1a2c79b3a9b67f1f56d01210624b6d493c9b734e018346" Apr 04 03:00:36 crc kubenswrapper[4681]: I0404 03:00:36.200906 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:00:36 crc kubenswrapper[4681]: E0404 03:00:36.201681 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:00:49 crc kubenswrapper[4681]: I0404 03:00:49.201151 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:00:49 crc kubenswrapper[4681]: E0404 03:00:49.201912 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:01:00 crc 
kubenswrapper[4681]: I0404 03:01:00.157406 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29587861-lh4pt"] Apr 04 03:01:00 crc kubenswrapper[4681]: E0404 03:01:00.158492 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a2def9e-7fb8-4501-ab87-267bdf00c720" containerName="oc" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.158507 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a2def9e-7fb8-4501-ab87-267bdf00c720" containerName="oc" Apr 04 03:01:00 crc kubenswrapper[4681]: E0404 03:01:00.158522 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7239abc9-4b26-4c13-90f7-db97bcd1a76c" containerName="collect-profiles" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.158530 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7239abc9-4b26-4c13-90f7-db97bcd1a76c" containerName="collect-profiles" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.158810 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a2def9e-7fb8-4501-ab87-267bdf00c720" containerName="oc" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.158840 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7239abc9-4b26-4c13-90f7-db97bcd1a76c" containerName="collect-profiles" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.159732 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.167048 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29587861-lh4pt"] Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.279472 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.279579 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.279636 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.280408 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mhl\" (UniqueName: \"kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.382776 4681 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-c4mhl\" (UniqueName: \"kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.382893 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.382921 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.382966 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.390564 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.392089 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.393102 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.401611 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mhl\" (UniqueName: \"kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl\") pod \"keystone-cron-29587861-lh4pt\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.486919 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:00 crc kubenswrapper[4681]: I0404 03:01:00.993833 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29587861-lh4pt"] Apr 04 03:01:00 crc kubenswrapper[4681]: W0404 03:01:00.997136 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddfc4e081_9222_4cea_833f_d9137246664a.slice/crio-2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468 WatchSource:0}: Error finding container 2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468: Status 404 returned error can't find the container with id 2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468 Apr 04 03:01:01 crc kubenswrapper[4681]: I0404 03:01:01.546375 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29587861-lh4pt" event={"ID":"dfc4e081-9222-4cea-833f-d9137246664a","Type":"ContainerStarted","Data":"974224ac5a129d511988e89c5e05870a858361e51b1aae236df6cc4a2b8247ee"} Apr 04 03:01:01 crc kubenswrapper[4681]: I0404 03:01:01.546689 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29587861-lh4pt" event={"ID":"dfc4e081-9222-4cea-833f-d9137246664a","Type":"ContainerStarted","Data":"2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468"} Apr 04 03:01:01 crc kubenswrapper[4681]: I0404 03:01:01.569633 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29587861-lh4pt" podStartSLOduration=1.5696082709999999 podStartE2EDuration="1.569608271s" podCreationTimestamp="2026-04-04 03:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 03:01:01.560881853 +0000 UTC m=+3941.226656973" watchObservedRunningTime="2026-04-04 03:01:01.569608271 +0000 UTC m=+3941.235383391" Apr 04 03:01:02 
crc kubenswrapper[4681]: I0404 03:01:02.201652 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:01:02 crc kubenswrapper[4681]: E0404 03:01:02.201980 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:01:05 crc kubenswrapper[4681]: I0404 03:01:05.590144 4681 generic.go:334] "Generic (PLEG): container finished" podID="dfc4e081-9222-4cea-833f-d9137246664a" containerID="974224ac5a129d511988e89c5e05870a858361e51b1aae236df6cc4a2b8247ee" exitCode=0 Apr 04 03:01:05 crc kubenswrapper[4681]: I0404 03:01:05.590422 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29587861-lh4pt" event={"ID":"dfc4e081-9222-4cea-833f-d9137246664a","Type":"ContainerDied","Data":"974224ac5a129d511988e89c5e05870a858361e51b1aae236df6cc4a2b8247ee"} Apr 04 03:01:06 crc kubenswrapper[4681]: I0404 03:01:06.998350 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.126394 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys\") pod \"dfc4e081-9222-4cea-833f-d9137246664a\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.126538 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4mhl\" (UniqueName: \"kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl\") pod \"dfc4e081-9222-4cea-833f-d9137246664a\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.126796 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data\") pod \"dfc4e081-9222-4cea-833f-d9137246664a\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.126879 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle\") pod \"dfc4e081-9222-4cea-833f-d9137246664a\" (UID: \"dfc4e081-9222-4cea-833f-d9137246664a\") " Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.131684 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl" (OuterVolumeSpecName: "kube-api-access-c4mhl") pod "dfc4e081-9222-4cea-833f-d9137246664a" (UID: "dfc4e081-9222-4cea-833f-d9137246664a"). InnerVolumeSpecName "kube-api-access-c4mhl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.134428 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dfc4e081-9222-4cea-833f-d9137246664a" (UID: "dfc4e081-9222-4cea-833f-d9137246664a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.170603 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dfc4e081-9222-4cea-833f-d9137246664a" (UID: "dfc4e081-9222-4cea-833f-d9137246664a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.200788 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data" (OuterVolumeSpecName: "config-data") pod "dfc4e081-9222-4cea-833f-d9137246664a" (UID: "dfc4e081-9222-4cea-833f-d9137246664a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.229434 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.229464 4681 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.229485 4681 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dfc4e081-9222-4cea-833f-d9137246664a-fernet-keys\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.229493 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4mhl\" (UniqueName: \"kubernetes.io/projected/dfc4e081-9222-4cea-833f-d9137246664a-kube-api-access-c4mhl\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.611665 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29587861-lh4pt" event={"ID":"dfc4e081-9222-4cea-833f-d9137246664a","Type":"ContainerDied","Data":"2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468"} Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.611988 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f6a666739076995fd661884fd11dd4125d596e36840066a8f623f5c87853468" Apr 04 03:01:07 crc kubenswrapper[4681]: I0404 03:01:07.611703 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29587861-lh4pt" Apr 04 03:01:15 crc kubenswrapper[4681]: I0404 03:01:15.201204 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:01:15 crc kubenswrapper[4681]: E0404 03:01:15.202542 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:01:29 crc kubenswrapper[4681]: I0404 03:01:29.201801 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:01:29 crc kubenswrapper[4681]: E0404 03:01:29.202855 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.348107 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:36 crc kubenswrapper[4681]: E0404 03:01:36.349309 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc4e081-9222-4cea-833f-d9137246664a" containerName="keystone-cron" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.349327 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc4e081-9222-4cea-833f-d9137246664a" containerName="keystone-cron" Apr 04 03:01:36 crc 
kubenswrapper[4681]: I0404 03:01:36.349630 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc4e081-9222-4cea-833f-d9137246664a" containerName="keystone-cron" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.351531 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.359057 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.476247 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.476590 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.476741 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sqj5\" (UniqueName: \"kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.578772 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sqj5\" (UniqueName: 
\"kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.579022 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.579182 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.579943 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.580768 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.599591 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sqj5\" (UniqueName: 
\"kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5\") pod \"redhat-marketplace-hf89r\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:36 crc kubenswrapper[4681]: I0404 03:01:36.683156 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:37 crc kubenswrapper[4681]: I0404 03:01:37.181080 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:37 crc kubenswrapper[4681]: I0404 03:01:37.916092 4681 generic.go:334] "Generic (PLEG): container finished" podID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerID="e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb" exitCode=0 Apr 04 03:01:37 crc kubenswrapper[4681]: I0404 03:01:37.916284 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerDied","Data":"e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb"} Apr 04 03:01:37 crc kubenswrapper[4681]: I0404 03:01:37.916659 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerStarted","Data":"93566a949e39b00990bb04ddc3243430b8a2fe6cb38098508ca5800466f1e6c1"} Apr 04 03:01:39 crc kubenswrapper[4681]: I0404 03:01:39.939599 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerStarted","Data":"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d"} Apr 04 03:01:40 crc kubenswrapper[4681]: I0404 03:01:40.954536 4681 generic.go:334] "Generic (PLEG): container finished" podID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" 
containerID="bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d" exitCode=0 Apr 04 03:01:40 crc kubenswrapper[4681]: I0404 03:01:40.954754 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerDied","Data":"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d"} Apr 04 03:01:41 crc kubenswrapper[4681]: I0404 03:01:41.968781 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerStarted","Data":"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf"} Apr 04 03:01:41 crc kubenswrapper[4681]: I0404 03:01:41.998680 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hf89r" podStartSLOduration=2.499703573 podStartE2EDuration="5.998654199s" podCreationTimestamp="2026-04-04 03:01:36 +0000 UTC" firstStartedPulling="2026-04-04 03:01:37.919658179 +0000 UTC m=+3977.585433309" lastFinishedPulling="2026-04-04 03:01:41.418608805 +0000 UTC m=+3981.084383935" observedRunningTime="2026-04-04 03:01:41.989072667 +0000 UTC m=+3981.654847787" watchObservedRunningTime="2026-04-04 03:01:41.998654199 +0000 UTC m=+3981.664429349" Apr 04 03:01:42 crc kubenswrapper[4681]: I0404 03:01:42.201304 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:01:42 crc kubenswrapper[4681]: E0404 03:01:42.201654 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:01:46 crc kubenswrapper[4681]: I0404 03:01:46.683318 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:46 crc kubenswrapper[4681]: I0404 03:01:46.683860 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:46 crc kubenswrapper[4681]: I0404 03:01:46.732474 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:47 crc kubenswrapper[4681]: I0404 03:01:47.059877 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:47 crc kubenswrapper[4681]: I0404 03:01:47.115036 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.033701 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hf89r" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="registry-server" containerID="cri-o://ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf" gracePeriod=2 Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.507378 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.617118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities\") pod \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.617197 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content\") pod \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.617527 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5sqj5\" (UniqueName: \"kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5\") pod \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\" (UID: \"1eaeaaef-e430-4fea-96a5-de0269ec0dd2\") " Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.618112 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities" (OuterVolumeSpecName: "utilities") pod "1eaeaaef-e430-4fea-96a5-de0269ec0dd2" (UID: "1eaeaaef-e430-4fea-96a5-de0269ec0dd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.629552 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5" (OuterVolumeSpecName: "kube-api-access-5sqj5") pod "1eaeaaef-e430-4fea-96a5-de0269ec0dd2" (UID: "1eaeaaef-e430-4fea-96a5-de0269ec0dd2"). InnerVolumeSpecName "kube-api-access-5sqj5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.650606 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1eaeaaef-e430-4fea-96a5-de0269ec0dd2" (UID: "1eaeaaef-e430-4fea-96a5-de0269ec0dd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.726630 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5sqj5\" (UniqueName: \"kubernetes.io/projected/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-kube-api-access-5sqj5\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.726669 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:49 crc kubenswrapper[4681]: I0404 03:01:49.726682 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1eaeaaef-e430-4fea-96a5-de0269ec0dd2-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.045295 4681 generic.go:334] "Generic (PLEG): container finished" podID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerID="ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf" exitCode=0 Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.045354 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerDied","Data":"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf"} Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.045395 4681 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-hf89r" event={"ID":"1eaeaaef-e430-4fea-96a5-de0269ec0dd2","Type":"ContainerDied","Data":"93566a949e39b00990bb04ddc3243430b8a2fe6cb38098508ca5800466f1e6c1"} Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.045420 4681 scope.go:117] "RemoveContainer" containerID="ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.045364 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hf89r" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.073193 4681 scope.go:117] "RemoveContainer" containerID="bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.092390 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.101930 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hf89r"] Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.130182 4681 scope.go:117] "RemoveContainer" containerID="e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.155443 4681 scope.go:117] "RemoveContainer" containerID="ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf" Apr 04 03:01:50 crc kubenswrapper[4681]: E0404 03:01:50.156075 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf\": container with ID starting with ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf not found: ID does not exist" containerID="ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.156115 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf"} err="failed to get container status \"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf\": rpc error: code = NotFound desc = could not find container \"ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf\": container with ID starting with ddd2e17f30e4859847efcf37d10a89234030d521b043928e10ca6029d8dd99bf not found: ID does not exist" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.156143 4681 scope.go:117] "RemoveContainer" containerID="bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d" Apr 04 03:01:50 crc kubenswrapper[4681]: E0404 03:01:50.156696 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d\": container with ID starting with bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d not found: ID does not exist" containerID="bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.156726 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d"} err="failed to get container status \"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d\": rpc error: code = NotFound desc = could not find container \"bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d\": container with ID starting with bbfb80e19180e30436a63c365095d7fdf6de86415cbb1d4fc1c320851b42b00d not found: ID does not exist" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.156744 4681 scope.go:117] "RemoveContainer" containerID="e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb" Apr 04 03:01:50 crc kubenswrapper[4681]: E0404 
03:01:50.156987 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb\": container with ID starting with e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb not found: ID does not exist" containerID="e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb" Apr 04 03:01:50 crc kubenswrapper[4681]: I0404 03:01:50.157017 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb"} err="failed to get container status \"e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb\": rpc error: code = NotFound desc = could not find container \"e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb\": container with ID starting with e1194aee41222228e8f328d5f1ad3c3a9e64d786efa1d66be99a30bf96954ffb not found: ID does not exist" Apr 04 03:01:51 crc kubenswrapper[4681]: I0404 03:01:51.214525 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" path="/var/lib/kubelet/pods/1eaeaaef-e430-4fea-96a5-de0269ec0dd2/volumes" Apr 04 03:01:53 crc kubenswrapper[4681]: I0404 03:01:53.200218 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:01:53 crc kubenswrapper[4681]: E0404 03:01:53.200815 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.151797 
4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587862-2ndtx"] Apr 04 03:02:00 crc kubenswrapper[4681]: E0404 03:02:00.152788 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="extract-utilities" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.152810 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="extract-utilities" Apr 04 03:02:00 crc kubenswrapper[4681]: E0404 03:02:00.152834 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="registry-server" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.152840 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="registry-server" Apr 04 03:02:00 crc kubenswrapper[4681]: E0404 03:02:00.152869 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="extract-content" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.152875 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="extract-content" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.153069 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1eaeaaef-e430-4fea-96a5-de0269ec0dd2" containerName="registry-server" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.155371 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.157598 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.157623 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.158708 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.192117 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587862-2ndtx"] Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.261168 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqtbv\" (UniqueName: \"kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv\") pod \"auto-csr-approver-29587862-2ndtx\" (UID: \"1100f700-c0d5-4d9d-ae38-8d2fae681121\") " pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.363606 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqtbv\" (UniqueName: \"kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv\") pod \"auto-csr-approver-29587862-2ndtx\" (UID: \"1100f700-c0d5-4d9d-ae38-8d2fae681121\") " pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.383722 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqtbv\" (UniqueName: \"kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv\") pod \"auto-csr-approver-29587862-2ndtx\" (UID: \"1100f700-c0d5-4d9d-ae38-8d2fae681121\") " 
pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.481328 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:00 crc kubenswrapper[4681]: I0404 03:02:00.933475 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587862-2ndtx"] Apr 04 03:02:01 crc kubenswrapper[4681]: I0404 03:02:01.159753 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" event={"ID":"1100f700-c0d5-4d9d-ae38-8d2fae681121","Type":"ContainerStarted","Data":"23656c79c4fb9c0b10bde29245b1067845b47dd57091cd79c3edc02d76f38b01"} Apr 04 03:02:02 crc kubenswrapper[4681]: I0404 03:02:02.175358 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" event={"ID":"1100f700-c0d5-4d9d-ae38-8d2fae681121","Type":"ContainerStarted","Data":"bcf930e8a6a4fb53230557eec2722ed14e620ca19efc39ae3b69687351819c7a"} Apr 04 03:02:02 crc kubenswrapper[4681]: I0404 03:02:02.189997 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" podStartSLOduration=1.350669343 podStartE2EDuration="2.189977153s" podCreationTimestamp="2026-04-04 03:02:00 +0000 UTC" firstStartedPulling="2026-04-04 03:02:00.936750659 +0000 UTC m=+4000.602525779" lastFinishedPulling="2026-04-04 03:02:01.776058459 +0000 UTC m=+4001.441833589" observedRunningTime="2026-04-04 03:02:02.188892844 +0000 UTC m=+4001.854667964" watchObservedRunningTime="2026-04-04 03:02:02.189977153 +0000 UTC m=+4001.855752273" Apr 04 03:02:03 crc kubenswrapper[4681]: I0404 03:02:03.185699 4681 generic.go:334] "Generic (PLEG): container finished" podID="1100f700-c0d5-4d9d-ae38-8d2fae681121" containerID="bcf930e8a6a4fb53230557eec2722ed14e620ca19efc39ae3b69687351819c7a" exitCode=0 Apr 04 03:02:03 crc 
kubenswrapper[4681]: I0404 03:02:03.185747 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" event={"ID":"1100f700-c0d5-4d9d-ae38-8d2fae681121","Type":"ContainerDied","Data":"bcf930e8a6a4fb53230557eec2722ed14e620ca19efc39ae3b69687351819c7a"} Apr 04 03:02:04 crc kubenswrapper[4681]: I0404 03:02:04.550960 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:04 crc kubenswrapper[4681]: I0404 03:02:04.650393 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqtbv\" (UniqueName: \"kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv\") pod \"1100f700-c0d5-4d9d-ae38-8d2fae681121\" (UID: \"1100f700-c0d5-4d9d-ae38-8d2fae681121\") " Apr 04 03:02:04 crc kubenswrapper[4681]: I0404 03:02:04.663877 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv" (OuterVolumeSpecName: "kube-api-access-cqtbv") pod "1100f700-c0d5-4d9d-ae38-8d2fae681121" (UID: "1100f700-c0d5-4d9d-ae38-8d2fae681121"). InnerVolumeSpecName "kube-api-access-cqtbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:02:04 crc kubenswrapper[4681]: I0404 03:02:04.752894 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqtbv\" (UniqueName: \"kubernetes.io/projected/1100f700-c0d5-4d9d-ae38-8d2fae681121-kube-api-access-cqtbv\") on node \"crc\" DevicePath \"\"" Apr 04 03:02:05 crc kubenswrapper[4681]: I0404 03:02:05.211182 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" Apr 04 03:02:05 crc kubenswrapper[4681]: I0404 03:02:05.215086 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587862-2ndtx" event={"ID":"1100f700-c0d5-4d9d-ae38-8d2fae681121","Type":"ContainerDied","Data":"23656c79c4fb9c0b10bde29245b1067845b47dd57091cd79c3edc02d76f38b01"} Apr 04 03:02:05 crc kubenswrapper[4681]: I0404 03:02:05.215128 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23656c79c4fb9c0b10bde29245b1067845b47dd57091cd79c3edc02d76f38b01" Apr 04 03:02:05 crc kubenswrapper[4681]: I0404 03:02:05.620755 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587856-dpr6f"] Apr 04 03:02:05 crc kubenswrapper[4681]: I0404 03:02:05.631996 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587856-dpr6f"] Apr 04 03:02:06 crc kubenswrapper[4681]: I0404 03:02:06.201111 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:02:06 crc kubenswrapper[4681]: E0404 03:02:06.201920 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:02:07 crc kubenswrapper[4681]: I0404 03:02:07.215034 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed7b3a2f-5dc4-4107-9027-259bbcbf4895" path="/var/lib/kubelet/pods/ed7b3a2f-5dc4-4107-9027-259bbcbf4895/volumes" Apr 04 03:02:20 crc kubenswrapper[4681]: I0404 03:02:20.200957 4681 scope.go:117] "RemoveContainer" 
containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:02:20 crc kubenswrapper[4681]: E0404 03:02:20.201760 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:02:31 crc kubenswrapper[4681]: I0404 03:02:31.209439 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:02:31 crc kubenswrapper[4681]: E0404 03:02:31.210191 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:02:31 crc kubenswrapper[4681]: I0404 03:02:31.721057 4681 scope.go:117] "RemoveContainer" containerID="c34d02d833f224a53a7194f5626141d5924607edbd1a0ffc5614f1594f3814ac" Apr 04 03:02:44 crc kubenswrapper[4681]: I0404 03:02:44.201621 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:02:44 crc kubenswrapper[4681]: E0404 03:02:44.203208 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:02:59 crc kubenswrapper[4681]: I0404 03:02:59.201359 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:02:59 crc kubenswrapper[4681]: E0404 03:02:59.202073 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:03:10 crc kubenswrapper[4681]: I0404 03:03:10.202145 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:03:10 crc kubenswrapper[4681]: E0404 03:03:10.203116 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:03:23 crc kubenswrapper[4681]: I0404 03:03:23.201391 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:03:23 crc kubenswrapper[4681]: E0404 03:03:23.202586 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:03:38 crc kubenswrapper[4681]: I0404 03:03:38.200946 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:03:38 crc kubenswrapper[4681]: E0404 03:03:38.203326 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:03:49 crc kubenswrapper[4681]: I0404 03:03:49.201550 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:03:49 crc kubenswrapper[4681]: E0404 03:03:49.202651 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.161059 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587864-hxkjw"] Apr 04 03:04:00 crc kubenswrapper[4681]: E0404 03:04:00.163551 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1100f700-c0d5-4d9d-ae38-8d2fae681121" containerName="oc" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.163589 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="1100f700-c0d5-4d9d-ae38-8d2fae681121" containerName="oc" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.163922 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1100f700-c0d5-4d9d-ae38-8d2fae681121" containerName="oc" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.164725 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.166730 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.168199 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.168574 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.188174 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587864-hxkjw"] Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.290355 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj5h7\" (UniqueName: \"kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7\") pod \"auto-csr-approver-29587864-hxkjw\" (UID: \"99d53d60-1a13-43df-bb02-ad3df25effab\") " pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.392996 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj5h7\" (UniqueName: \"kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7\") pod \"auto-csr-approver-29587864-hxkjw\" (UID: \"99d53d60-1a13-43df-bb02-ad3df25effab\") " 
pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.409951 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj5h7\" (UniqueName: \"kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7\") pod \"auto-csr-approver-29587864-hxkjw\" (UID: \"99d53d60-1a13-43df-bb02-ad3df25effab\") " pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.491722 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.966310 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587864-hxkjw"] Apr 04 03:04:00 crc kubenswrapper[4681]: I0404 03:04:00.980663 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:04:01 crc kubenswrapper[4681]: I0404 03:04:01.207649 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:04:01 crc kubenswrapper[4681]: E0404 03:04:01.207973 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:04:01 crc kubenswrapper[4681]: I0404 03:04:01.439636 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" event={"ID":"99d53d60-1a13-43df-bb02-ad3df25effab","Type":"ContainerStarted","Data":"a76878b6ce08713dde2a37a3a03b7ed621efdeb35afc944bc61d339f823c0c40"} Apr 04 
03:04:02 crc kubenswrapper[4681]: I0404 03:04:02.455139 4681 generic.go:334] "Generic (PLEG): container finished" podID="99d53d60-1a13-43df-bb02-ad3df25effab" containerID="74494d1307a5e5b8717dc3d20c445d60b75d86adaeea5e8eeeb22f6d1858034f" exitCode=0 Apr 04 03:04:02 crc kubenswrapper[4681]: I0404 03:04:02.455202 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" event={"ID":"99d53d60-1a13-43df-bb02-ad3df25effab","Type":"ContainerDied","Data":"74494d1307a5e5b8717dc3d20c445d60b75d86adaeea5e8eeeb22f6d1858034f"} Apr 04 03:04:03 crc kubenswrapper[4681]: I0404 03:04:03.841342 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:03 crc kubenswrapper[4681]: I0404 03:04:03.983045 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj5h7\" (UniqueName: \"kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7\") pod \"99d53d60-1a13-43df-bb02-ad3df25effab\" (UID: \"99d53d60-1a13-43df-bb02-ad3df25effab\") " Apr 04 03:04:03 crc kubenswrapper[4681]: I0404 03:04:03.992593 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7" (OuterVolumeSpecName: "kube-api-access-hj5h7") pod "99d53d60-1a13-43df-bb02-ad3df25effab" (UID: "99d53d60-1a13-43df-bb02-ad3df25effab"). InnerVolumeSpecName "kube-api-access-hj5h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.085570 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj5h7\" (UniqueName: \"kubernetes.io/projected/99d53d60-1a13-43df-bb02-ad3df25effab-kube-api-access-hj5h7\") on node \"crc\" DevicePath \"\"" Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.476505 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" event={"ID":"99d53d60-1a13-43df-bb02-ad3df25effab","Type":"ContainerDied","Data":"a76878b6ce08713dde2a37a3a03b7ed621efdeb35afc944bc61d339f823c0c40"} Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.477085 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76878b6ce08713dde2a37a3a03b7ed621efdeb35afc944bc61d339f823c0c40" Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.476563 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587864-hxkjw" Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.904436 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587858-54ggf"] Apr 04 03:04:04 crc kubenswrapper[4681]: I0404 03:04:04.927195 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587858-54ggf"] Apr 04 03:04:05 crc kubenswrapper[4681]: I0404 03:04:05.212335 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d07e3739-bc4b-48bd-981f-f37f9a52e4e1" path="/var/lib/kubelet/pods/d07e3739-bc4b-48bd-981f-f37f9a52e4e1/volumes" Apr 04 03:04:14 crc kubenswrapper[4681]: I0404 03:04:14.200731 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:04:14 crc kubenswrapper[4681]: E0404 03:04:14.201481 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:04:29 crc kubenswrapper[4681]: I0404 03:04:29.201235 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:04:29 crc kubenswrapper[4681]: E0404 03:04:29.202002 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:04:31 crc kubenswrapper[4681]: I0404 03:04:31.839901 4681 scope.go:117] "RemoveContainer" containerID="935aad3be6a753236a487dfa67607c9ab7ff2e608bb599f7c6c02186a9421af6" Apr 04 03:04:44 crc kubenswrapper[4681]: I0404 03:04:44.201159 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:04:44 crc kubenswrapper[4681]: E0404 03:04:44.201722 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.675223 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:04:46 crc kubenswrapper[4681]: E0404 03:04:46.676354 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99d53d60-1a13-43df-bb02-ad3df25effab" containerName="oc" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.676375 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="99d53d60-1a13-43df-bb02-ad3df25effab" containerName="oc" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.676724 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="99d53d60-1a13-43df-bb02-ad3df25effab" containerName="oc" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.679021 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.695133 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.728176 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.728468 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.728524 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4ns\" 
(UniqueName: \"kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.830386 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4ns\" (UniqueName: \"kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.830519 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.830656 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.831061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.831120 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:46 crc kubenswrapper[4681]: I0404 03:04:46.859122 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4ns\" (UniqueName: \"kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns\") pod \"certified-operators-7sgq5\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:47 crc kubenswrapper[4681]: I0404 03:04:47.015670 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:47 crc kubenswrapper[4681]: I0404 03:04:47.581676 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:04:47 crc kubenswrapper[4681]: I0404 03:04:47.925049 4681 generic.go:334] "Generic (PLEG): container finished" podID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerID="54307e4060e74a0c4eeab286d99f0db44f65646f7dc934d5f258a39dc9a6ffce" exitCode=0 Apr 04 03:04:47 crc kubenswrapper[4681]: I0404 03:04:47.925371 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerDied","Data":"54307e4060e74a0c4eeab286d99f0db44f65646f7dc934d5f258a39dc9a6ffce"} Apr 04 03:04:47 crc kubenswrapper[4681]: I0404 03:04:47.925403 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerStarted","Data":"1b48e62f7882ff370c2fa9acccdfac003972919935805011aea1fed415e65d94"} Apr 04 03:04:48 crc kubenswrapper[4681]: I0404 03:04:48.934237 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerStarted","Data":"b659e1917e896c118d2b0246dae8417f796f42f48fbe87d01359c43496c6351a"} Apr 04 03:04:50 crc kubenswrapper[4681]: I0404 03:04:50.954815 4681 generic.go:334] "Generic (PLEG): container finished" podID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerID="b659e1917e896c118d2b0246dae8417f796f42f48fbe87d01359c43496c6351a" exitCode=0 Apr 04 03:04:50 crc kubenswrapper[4681]: I0404 03:04:50.954889 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerDied","Data":"b659e1917e896c118d2b0246dae8417f796f42f48fbe87d01359c43496c6351a"} Apr 04 03:04:51 crc kubenswrapper[4681]: I0404 03:04:51.966543 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerStarted","Data":"5cdec67b564488b051caa31386396d5317f72de9a1076cd8ee96e43c57e51ab2"} Apr 04 03:04:51 crc kubenswrapper[4681]: I0404 03:04:51.990086 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7sgq5" podStartSLOduration=2.559543192 podStartE2EDuration="5.990066237s" podCreationTimestamp="2026-04-04 03:04:46 +0000 UTC" firstStartedPulling="2026-04-04 03:04:47.927131856 +0000 UTC m=+4167.592906976" lastFinishedPulling="2026-04-04 03:04:51.357654861 +0000 UTC m=+4171.023430021" observedRunningTime="2026-04-04 03:04:51.982009077 +0000 UTC m=+4171.647784197" watchObservedRunningTime="2026-04-04 03:04:51.990066237 +0000 UTC m=+4171.655841367" Apr 04 03:04:57 crc kubenswrapper[4681]: I0404 03:04:57.016383 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:57 crc kubenswrapper[4681]: I0404 03:04:57.017742 
4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:57 crc kubenswrapper[4681]: I0404 03:04:57.090905 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:58 crc kubenswrapper[4681]: I0404 03:04:58.068536 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:04:58 crc kubenswrapper[4681]: I0404 03:04:58.118703 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:04:59 crc kubenswrapper[4681]: I0404 03:04:59.201180 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6" Apr 04 03:05:00 crc kubenswrapper[4681]: I0404 03:05:00.054881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137"} Apr 04 03:05:00 crc kubenswrapper[4681]: I0404 03:05:00.055188 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7sgq5" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="registry-server" containerID="cri-o://5cdec67b564488b051caa31386396d5317f72de9a1076cd8ee96e43c57e51ab2" gracePeriod=2 Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.066529 4681 generic.go:334] "Generic (PLEG): container finished" podID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerID="5cdec67b564488b051caa31386396d5317f72de9a1076cd8ee96e43c57e51ab2" exitCode=0 Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.066610 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerDied","Data":"5cdec67b564488b051caa31386396d5317f72de9a1076cd8ee96e43c57e51ab2"} Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.066809 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7sgq5" event={"ID":"0de47e2d-a637-4ec7-aae7-d42eb73a6efb","Type":"ContainerDied","Data":"1b48e62f7882ff370c2fa9acccdfac003972919935805011aea1fed415e65d94"} Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.066822 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b48e62f7882ff370c2fa9acccdfac003972919935805011aea1fed415e65d94" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.092722 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.159128 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities\") pod \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.159303 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content\") pod \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\" (UID: \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.159386 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4ns\" (UniqueName: \"kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns\") pod \"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\" (UID: 
\"0de47e2d-a637-4ec7-aae7-d42eb73a6efb\") " Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.160371 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities" (OuterVolumeSpecName: "utilities") pod "0de47e2d-a637-4ec7-aae7-d42eb73a6efb" (UID: "0de47e2d-a637-4ec7-aae7-d42eb73a6efb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.176731 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns" (OuterVolumeSpecName: "kube-api-access-bs4ns") pod "0de47e2d-a637-4ec7-aae7-d42eb73a6efb" (UID: "0de47e2d-a637-4ec7-aae7-d42eb73a6efb"). InnerVolumeSpecName "kube-api-access-bs4ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.231066 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0de47e2d-a637-4ec7-aae7-d42eb73a6efb" (UID: "0de47e2d-a637-4ec7-aae7-d42eb73a6efb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.262624 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.262924 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4ns\" (UniqueName: \"kubernetes.io/projected/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-kube-api-access-bs4ns\") on node \"crc\" DevicePath \"\"" Apr 04 03:05:01 crc kubenswrapper[4681]: I0404 03:05:01.262938 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0de47e2d-a637-4ec7-aae7-d42eb73a6efb-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:05:02 crc kubenswrapper[4681]: I0404 03:05:02.076887 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7sgq5" Apr 04 03:05:02 crc kubenswrapper[4681]: I0404 03:05:02.125374 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:05:02 crc kubenswrapper[4681]: I0404 03:05:02.134007 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7sgq5"] Apr 04 03:05:03 crc kubenswrapper[4681]: I0404 03:05:03.218574 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" path="/var/lib/kubelet/pods/0de47e2d-a637-4ec7-aae7-d42eb73a6efb/volumes" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.169871 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587866-xzk8f"] Apr 04 03:06:00 crc kubenswrapper[4681]: E0404 03:06:00.171089 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="extract-utilities" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.171108 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="extract-utilities" Apr 04 03:06:00 crc kubenswrapper[4681]: E0404 03:06:00.171125 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="extract-content" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.171133 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="extract-content" Apr 04 03:06:00 crc kubenswrapper[4681]: E0404 03:06:00.171174 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="registry-server" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.171183 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="registry-server" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.171434 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0de47e2d-a637-4ec7-aae7-d42eb73a6efb" containerName="registry-server" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.172353 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.176426 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.177434 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.182449 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.186767 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587866-xzk8f"] Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.278035 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kjlp\" (UniqueName: \"kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp\") pod \"auto-csr-approver-29587866-xzk8f\" (UID: \"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a\") " pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.380885 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kjlp\" (UniqueName: \"kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp\") pod \"auto-csr-approver-29587866-xzk8f\" (UID: \"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a\") " pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.401278 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kjlp\" (UniqueName: \"kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp\") pod \"auto-csr-approver-29587866-xzk8f\" (UID: \"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a\") " 
pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.497299 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:00 crc kubenswrapper[4681]: I0404 03:06:00.969777 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587866-xzk8f"] Apr 04 03:06:01 crc kubenswrapper[4681]: I0404 03:06:01.711632 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" event={"ID":"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a","Type":"ContainerStarted","Data":"8fd40c45512c820b921fe7e22cd4f788d1525b0cd1f6e67bb57a6c8fddcf8d41"} Apr 04 03:06:03 crc kubenswrapper[4681]: I0404 03:06:03.734257 4681 generic.go:334] "Generic (PLEG): container finished" podID="66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" containerID="968c9ec1393fc59719838aa316c9ec172a9cc596e987ee2663f02f05c0b57bbd" exitCode=0 Apr 04 03:06:03 crc kubenswrapper[4681]: I0404 03:06:03.734706 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" event={"ID":"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a","Type":"ContainerDied","Data":"968c9ec1393fc59719838aa316c9ec172a9cc596e987ee2663f02f05c0b57bbd"} Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.086807 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.179084 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kjlp\" (UniqueName: \"kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp\") pod \"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a\" (UID: \"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a\") " Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.186816 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp" (OuterVolumeSpecName: "kube-api-access-5kjlp") pod "66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" (UID: "66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a"). InnerVolumeSpecName "kube-api-access-5kjlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.282430 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kjlp\" (UniqueName: \"kubernetes.io/projected/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a-kube-api-access-5kjlp\") on node \"crc\" DevicePath \"\"" Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.753219 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" event={"ID":"66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a","Type":"ContainerDied","Data":"8fd40c45512c820b921fe7e22cd4f788d1525b0cd1f6e67bb57a6c8fddcf8d41"} Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.753277 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fd40c45512c820b921fe7e22cd4f788d1525b0cd1f6e67bb57a6c8fddcf8d41" Apr 04 03:06:05 crc kubenswrapper[4681]: I0404 03:06:05.753334 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587866-xzk8f" Apr 04 03:06:06 crc kubenswrapper[4681]: I0404 03:06:06.188763 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587860-z4dwn"] Apr 04 03:06:06 crc kubenswrapper[4681]: I0404 03:06:06.198276 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587860-z4dwn"] Apr 04 03:06:07 crc kubenswrapper[4681]: I0404 03:06:07.213635 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a2def9e-7fb8-4501-ab87-267bdf00c720" path="/var/lib/kubelet/pods/3a2def9e-7fb8-4501-ab87-267bdf00c720/volumes" Apr 04 03:06:31 crc kubenswrapper[4681]: I0404 03:06:31.933684 4681 scope.go:117] "RemoveContainer" containerID="23e9565f98689d4f36fe2f5be1b32e192a00642bf418e55fbea479119d78ae3f" Apr 04 03:06:40 crc kubenswrapper[4681]: I0404 03:06:40.865377 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"] Apr 04 03:06:40 crc kubenswrapper[4681]: E0404 03:06:40.866387 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" containerName="oc" Apr 04 03:06:40 crc kubenswrapper[4681]: I0404 03:06:40.866403 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" containerName="oc" Apr 04 03:06:40 crc kubenswrapper[4681]: I0404 03:06:40.866623 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" containerName="oc" Apr 04 03:06:40 crc kubenswrapper[4681]: I0404 03:06:40.868251 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:40 crc kubenswrapper[4681]: I0404 03:06:40.899989 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"] Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.048419 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.048798 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.048889 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7fk\" (UniqueName: \"kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.150627 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.150788 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.150823 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7fk\" (UniqueName: \"kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.151161 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.151403 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.172044 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7fk\" (UniqueName: \"kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk\") pod \"redhat-operators-74hlb\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") " pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.202938 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-74hlb" Apr 04 03:06:41 crc kubenswrapper[4681]: I0404 03:06:41.684848 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"] Apr 04 03:06:42 crc kubenswrapper[4681]: I0404 03:06:42.113047 4681 generic.go:334] "Generic (PLEG): container finished" podID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerID="302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c" exitCode=0 Apr 04 03:06:42 crc kubenswrapper[4681]: I0404 03:06:42.113152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerDied","Data":"302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c"} Apr 04 03:06:42 crc kubenswrapper[4681]: I0404 03:06:42.113429 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerStarted","Data":"a664eedc0e7d95aaabbbabde4b738a40909e4eb12fb67bfc9dda2610bef073ed"} Apr 04 03:06:44 crc kubenswrapper[4681]: I0404 03:06:44.150160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerStarted","Data":"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"} Apr 04 03:06:52 crc kubenswrapper[4681]: I0404 03:06:52.231302 4681 generic.go:334] "Generic (PLEG): container finished" podID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerID="da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c" exitCode=0 Apr 04 03:06:52 crc kubenswrapper[4681]: I0404 03:06:52.231347 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" 
event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerDied","Data":"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"}
Apr 04 03:06:53 crc kubenswrapper[4681]: I0404 03:06:53.241465 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerStarted","Data":"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"}
Apr 04 03:06:53 crc kubenswrapper[4681]: I0404 03:06:53.260200 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-74hlb" podStartSLOduration=2.73938074 podStartE2EDuration="13.260181355s" podCreationTimestamp="2026-04-04 03:06:40 +0000 UTC" firstStartedPulling="2026-04-04 03:06:42.114733376 +0000 UTC m=+4281.780508496" lastFinishedPulling="2026-04-04 03:06:52.635533991 +0000 UTC m=+4292.301309111" observedRunningTime="2026-04-04 03:06:53.259297681 +0000 UTC m=+4292.925072801" watchObservedRunningTime="2026-04-04 03:06:53.260181355 +0000 UTC m=+4292.925956475"
Apr 04 03:07:01 crc kubenswrapper[4681]: I0404 03:07:01.224520 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:01 crc kubenswrapper[4681]: I0404 03:07:01.224973 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:01 crc kubenswrapper[4681]: I0404 03:07:01.265783 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:01 crc kubenswrapper[4681]: I0404 03:07:01.368555 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:01 crc kubenswrapper[4681]: I0404 03:07:01.517195 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"]
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.338184 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-74hlb" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="registry-server" containerID="cri-o://82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12" gracePeriod=2
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.817170 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.933722 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f7fk\" (UniqueName: \"kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk\") pod \"3d32f28c-6d28-492f-aa47-82e662256e1d\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") "
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.934154 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content\") pod \"3d32f28c-6d28-492f-aa47-82e662256e1d\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") "
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.934222 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities\") pod \"3d32f28c-6d28-492f-aa47-82e662256e1d\" (UID: \"3d32f28c-6d28-492f-aa47-82e662256e1d\") "
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.935423 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities" (OuterVolumeSpecName: "utilities") pod "3d32f28c-6d28-492f-aa47-82e662256e1d" (UID: "3d32f28c-6d28-492f-aa47-82e662256e1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:07:03 crc kubenswrapper[4681]: I0404 03:07:03.940566 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk" (OuterVolumeSpecName: "kube-api-access-7f7fk") pod "3d32f28c-6d28-492f-aa47-82e662256e1d" (UID: "3d32f28c-6d28-492f-aa47-82e662256e1d"). InnerVolumeSpecName "kube-api-access-7f7fk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.037451 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f7fk\" (UniqueName: \"kubernetes.io/projected/3d32f28c-6d28-492f-aa47-82e662256e1d-kube-api-access-7f7fk\") on node \"crc\" DevicePath \"\""
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.037490 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.077141 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d32f28c-6d28-492f-aa47-82e662256e1d" (UID: "3d32f28c-6d28-492f-aa47-82e662256e1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.139104 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d32f28c-6d28-492f-aa47-82e662256e1d-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.349677 4681 generic.go:334] "Generic (PLEG): container finished" podID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerID="82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12" exitCode=0
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.349718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerDied","Data":"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"}
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.349752 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-74hlb" event={"ID":"3d32f28c-6d28-492f-aa47-82e662256e1d","Type":"ContainerDied","Data":"a664eedc0e7d95aaabbbabde4b738a40909e4eb12fb67bfc9dda2610bef073ed"}
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.349769 4681 scope.go:117] "RemoveContainer" containerID="82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.349852 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-74hlb"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.372172 4681 scope.go:117] "RemoveContainer" containerID="da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.408621 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"]
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.417127 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-74hlb"]
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.420474 4681 scope.go:117] "RemoveContainer" containerID="302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.474356 4681 scope.go:117] "RemoveContainer" containerID="82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"
Apr 04 03:07:04 crc kubenswrapper[4681]: E0404 03:07:04.474766 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12\": container with ID starting with 82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12 not found: ID does not exist" containerID="82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.474811 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12"} err="failed to get container status \"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12\": rpc error: code = NotFound desc = could not find container \"82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12\": container with ID starting with 82f06465448a1056ef8ec1da875f0647a4d3868caa5be557d4c30b9ddb526d12 not found: ID does not exist"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.474836 4681 scope.go:117] "RemoveContainer" containerID="da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"
Apr 04 03:07:04 crc kubenswrapper[4681]: E0404 03:07:04.475372 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c\": container with ID starting with da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c not found: ID does not exist" containerID="da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.475428 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c"} err="failed to get container status \"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c\": rpc error: code = NotFound desc = could not find container \"da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c\": container with ID starting with da9c1709d69b7c88b57c1ef0f57c3cdc3eba7be8456d3959db0c0865b0afe94c not found: ID does not exist"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.475472 4681 scope.go:117] "RemoveContainer" containerID="302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c"
Apr 04 03:07:04 crc kubenswrapper[4681]: E0404 03:07:04.476470 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c\": container with ID starting with 302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c not found: ID does not exist" containerID="302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c"
Apr 04 03:07:04 crc kubenswrapper[4681]: I0404 03:07:04.476518 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c"} err="failed to get container status \"302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c\": rpc error: code = NotFound desc = could not find container \"302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c\": container with ID starting with 302707cad0b2fe1fd4b89cbd09ee84ec0d5f6de9e1b9b82823f2d3c076c9242c not found: ID does not exist"
Apr 04 03:07:05 crc kubenswrapper[4681]: I0404 03:07:05.212140 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" path="/var/lib/kubelet/pods/3d32f28c-6d28-492f-aa47-82e662256e1d/volumes"
Apr 04 03:07:26 crc kubenswrapper[4681]: I0404 03:07:26.524137 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:07:26 crc kubenswrapper[4681]: I0404 03:07:26.524567 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:07:56 crc kubenswrapper[4681]: I0404 03:07:56.524383 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:07:56 crc kubenswrapper[4681]: I0404 03:07:56.524888 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.153668 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587868-wtpjm"]
Apr 04 03:08:00 crc kubenswrapper[4681]: E0404 03:08:00.154730 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="registry-server"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.154748 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="registry-server"
Apr 04 03:08:00 crc kubenswrapper[4681]: E0404 03:08:00.154776 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="extract-utilities"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.154784 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="extract-utilities"
Apr 04 03:08:00 crc kubenswrapper[4681]: E0404 03:08:00.154811 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="extract-content"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.154819 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="extract-content"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.155077 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d32f28c-6d28-492f-aa47-82e662256e1d" containerName="registry-server"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.155964 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.165449 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.165535 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.171874 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.175920 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587868-wtpjm"]
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.247060 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6h4p\" (UniqueName: \"kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p\") pod \"auto-csr-approver-29587868-wtpjm\" (UID: \"d34424aa-2b02-4f44-8435-809f2227636f\") " pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.351604 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6h4p\" (UniqueName: \"kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p\") pod \"auto-csr-approver-29587868-wtpjm\" (UID: \"d34424aa-2b02-4f44-8435-809f2227636f\") " pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.371923 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6h4p\" (UniqueName: \"kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p\") pod \"auto-csr-approver-29587868-wtpjm\" (UID: \"d34424aa-2b02-4f44-8435-809f2227636f\") " pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.472817 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:00 crc kubenswrapper[4681]: I0404 03:08:00.918107 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587868-wtpjm"]
Apr 04 03:08:01 crc kubenswrapper[4681]: I0404 03:08:01.935427 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587868-wtpjm" event={"ID":"d34424aa-2b02-4f44-8435-809f2227636f","Type":"ContainerStarted","Data":"8cdb609b0a408ecf60626775adbb0bfafd549f7499071f5a5e86122cc59782aa"}
Apr 04 03:08:02 crc kubenswrapper[4681]: I0404 03:08:02.959523 4681 generic.go:334] "Generic (PLEG): container finished" podID="d34424aa-2b02-4f44-8435-809f2227636f" containerID="5b80ef6271bb54226cc87ed89ee2df5ff45497a6b91483848fc3bae066d1c985" exitCode=0
Apr 04 03:08:02 crc kubenswrapper[4681]: I0404 03:08:02.959719 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587868-wtpjm" event={"ID":"d34424aa-2b02-4f44-8435-809f2227636f","Type":"ContainerDied","Data":"5b80ef6271bb54226cc87ed89ee2df5ff45497a6b91483848fc3bae066d1c985"}
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.380388 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.470183 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6h4p\" (UniqueName: \"kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p\") pod \"d34424aa-2b02-4f44-8435-809f2227636f\" (UID: \"d34424aa-2b02-4f44-8435-809f2227636f\") "
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.476163 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p" (OuterVolumeSpecName: "kube-api-access-s6h4p") pod "d34424aa-2b02-4f44-8435-809f2227636f" (UID: "d34424aa-2b02-4f44-8435-809f2227636f"). InnerVolumeSpecName "kube-api-access-s6h4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.572649 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6h4p\" (UniqueName: \"kubernetes.io/projected/d34424aa-2b02-4f44-8435-809f2227636f-kube-api-access-s6h4p\") on node \"crc\" DevicePath \"\""
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.981040 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587868-wtpjm" event={"ID":"d34424aa-2b02-4f44-8435-809f2227636f","Type":"ContainerDied","Data":"8cdb609b0a408ecf60626775adbb0bfafd549f7499071f5a5e86122cc59782aa"}
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.981105 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cdb609b0a408ecf60626775adbb0bfafd549f7499071f5a5e86122cc59782aa"
Apr 04 03:08:04 crc kubenswrapper[4681]: I0404 03:08:04.981138 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587868-wtpjm"
Apr 04 03:08:05 crc kubenswrapper[4681]: I0404 03:08:05.460926 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587862-2ndtx"]
Apr 04 03:08:05 crc kubenswrapper[4681]: I0404 03:08:05.474065 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587862-2ndtx"]
Apr 04 03:08:07 crc kubenswrapper[4681]: I0404 03:08:07.217085 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1100f700-c0d5-4d9d-ae38-8d2fae681121" path="/var/lib/kubelet/pods/1100f700-c0d5-4d9d-ae38-8d2fae681121/volumes"
Apr 04 03:08:26 crc kubenswrapper[4681]: I0404 03:08:26.523997 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:08:26 crc kubenswrapper[4681]: I0404 03:08:26.524674 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:08:26 crc kubenswrapper[4681]: I0404 03:08:26.524733 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 03:08:26 crc kubenswrapper[4681]: I0404 03:08:26.525747 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Apr 04 03:08:26 crc kubenswrapper[4681]: I0404 03:08:26.525838 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137" gracePeriod=600
Apr 04 03:08:27 crc kubenswrapper[4681]: I0404 03:08:27.246630 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137" exitCode=0
Apr 04 03:08:27 crc kubenswrapper[4681]: I0404 03:08:27.247304 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137"}
Apr 04 03:08:27 crc kubenswrapper[4681]: I0404 03:08:27.247347 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"}
Apr 04 03:08:27 crc kubenswrapper[4681]: I0404 03:08:27.247371 4681 scope.go:117] "RemoveContainer" containerID="447590cb991b3a8f7b114ff69052f3e9ff022024bc6e1ab610dde7ea4d8598b6"
Apr 04 03:08:32 crc kubenswrapper[4681]: I0404 03:08:32.049147 4681 scope.go:117] "RemoveContainer" containerID="bcf930e8a6a4fb53230557eec2722ed14e620ca19efc39ae3b69687351819c7a"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.153480 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587870-jv4fw"]
Apr 04 03:10:00 crc kubenswrapper[4681]: E0404 03:10:00.154379 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d34424aa-2b02-4f44-8435-809f2227636f" containerName="oc"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.154393 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d34424aa-2b02-4f44-8435-809f2227636f" containerName="oc"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.154609 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d34424aa-2b02-4f44-8435-809f2227636f" containerName="oc"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.155333 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.158059 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.158246 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.160501 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.182723 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587870-jv4fw"]
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.285022 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw8wx\" (UniqueName: \"kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx\") pod \"auto-csr-approver-29587870-jv4fw\" (UID: \"782b5369-6e4b-462d-aa6d-28d68fdcf0f9\") " pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.386825 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw8wx\" (UniqueName: \"kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx\") pod \"auto-csr-approver-29587870-jv4fw\" (UID: \"782b5369-6e4b-462d-aa6d-28d68fdcf0f9\") " pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.497083 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw8wx\" (UniqueName: \"kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx\") pod \"auto-csr-approver-29587870-jv4fw\" (UID: \"782b5369-6e4b-462d-aa6d-28d68fdcf0f9\") " pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:00 crc kubenswrapper[4681]: I0404 03:10:00.795820 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:01 crc kubenswrapper[4681]: I0404 03:10:01.222628 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587870-jv4fw"]
Apr 04 03:10:01 crc kubenswrapper[4681]: I0404 03:10:01.229667 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 04 03:10:01 crc kubenswrapper[4681]: I0404 03:10:01.295952 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587870-jv4fw" event={"ID":"782b5369-6e4b-462d-aa6d-28d68fdcf0f9","Type":"ContainerStarted","Data":"7f6c88a0a04adc62c7082aff8936d788ef5f41087a6e53fb782b8c44ea3e7024"}
Apr 04 03:10:03 crc kubenswrapper[4681]: I0404 03:10:03.318132 4681 generic.go:334] "Generic (PLEG): container finished" podID="782b5369-6e4b-462d-aa6d-28d68fdcf0f9" containerID="a3a1f8a6516931036eb3310d915057c52f8e9a46892a6c3533612c8b2a672a0f" exitCode=0
Apr 04 03:10:03 crc kubenswrapper[4681]: I0404 03:10:03.318348 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587870-jv4fw" event={"ID":"782b5369-6e4b-462d-aa6d-28d68fdcf0f9","Type":"ContainerDied","Data":"a3a1f8a6516931036eb3310d915057c52f8e9a46892a6c3533612c8b2a672a0f"}
Apr 04 03:10:04 crc kubenswrapper[4681]: I0404 03:10:04.827533 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:04 crc kubenswrapper[4681]: I0404 03:10:04.983447 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw8wx\" (UniqueName: \"kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx\") pod \"782b5369-6e4b-462d-aa6d-28d68fdcf0f9\" (UID: \"782b5369-6e4b-462d-aa6d-28d68fdcf0f9\") "
Apr 04 03:10:04 crc kubenswrapper[4681]: I0404 03:10:04.992845 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx" (OuterVolumeSpecName: "kube-api-access-bw8wx") pod "782b5369-6e4b-462d-aa6d-28d68fdcf0f9" (UID: "782b5369-6e4b-462d-aa6d-28d68fdcf0f9"). InnerVolumeSpecName "kube-api-access-bw8wx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.085917 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw8wx\" (UniqueName: \"kubernetes.io/projected/782b5369-6e4b-462d-aa6d-28d68fdcf0f9-kube-api-access-bw8wx\") on node \"crc\" DevicePath \"\""
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.342302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587870-jv4fw" event={"ID":"782b5369-6e4b-462d-aa6d-28d68fdcf0f9","Type":"ContainerDied","Data":"7f6c88a0a04adc62c7082aff8936d788ef5f41087a6e53fb782b8c44ea3e7024"}
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.342356 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f6c88a0a04adc62c7082aff8936d788ef5f41087a6e53fb782b8c44ea3e7024"
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.342331 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587870-jv4fw"
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.907100 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587864-hxkjw"]
Apr 04 03:10:05 crc kubenswrapper[4681]: I0404 03:10:05.918390 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587864-hxkjw"]
Apr 04 03:10:07 crc kubenswrapper[4681]: I0404 03:10:07.222741 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99d53d60-1a13-43df-bb02-ad3df25effab" path="/var/lib/kubelet/pods/99d53d60-1a13-43df-bb02-ad3df25effab/volumes"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.712949 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jd9kg"]
Apr 04 03:10:12 crc kubenswrapper[4681]: E0404 03:10:12.714288 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="782b5369-6e4b-462d-aa6d-28d68fdcf0f9" containerName="oc"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.714312 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="782b5369-6e4b-462d-aa6d-28d68fdcf0f9" containerName="oc"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.714700 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="782b5369-6e4b-462d-aa6d-28d68fdcf0f9" containerName="oc"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.717447 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.734894 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd9kg"]
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.861298 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr58p\" (UniqueName: \"kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.861531 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.861574 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.963883 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.963942 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.964104 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr58p\" (UniqueName: \"kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.964448 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:12 crc kubenswrapper[4681]: I0404 03:10:12.964448 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:13 crc kubenswrapper[4681]: I0404 03:10:13.195386 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr58p\" (UniqueName: \"kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p\") pod \"community-operators-jd9kg\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:13 crc kubenswrapper[4681]: I0404 03:10:13.352751 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:13 crc kubenswrapper[4681]: I0404 03:10:13.845255 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jd9kg"]
Apr 04 03:10:14 crc kubenswrapper[4681]: I0404 03:10:14.452167 4681 generic.go:334] "Generic (PLEG): container finished" podID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerID="b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817" exitCode=0
Apr 04 03:10:14 crc kubenswrapper[4681]: I0404 03:10:14.452226 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerDied","Data":"b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817"}
Apr 04 03:10:14 crc kubenswrapper[4681]: I0404 03:10:14.452302 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerStarted","Data":"fd88a755633973d1d13e1f53d8adde446f978ecaa20d14e253399be7e9a9506d"}
Apr 04 03:10:16 crc kubenswrapper[4681]: I0404 03:10:16.480855 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerStarted","Data":"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98"}
Apr 04 03:10:17 crc kubenswrapper[4681]: I0404 03:10:17.491752 4681 generic.go:334] "Generic (PLEG): container finished" podID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerID="81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98" exitCode=0
Apr 04 03:10:17 crc kubenswrapper[4681]: I0404 03:10:17.492414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerDied","Data":"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98"}
Apr 04 03:10:18 crc kubenswrapper[4681]: I0404 03:10:18.518881 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerStarted","Data":"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2"}
Apr 04 03:10:18 crc kubenswrapper[4681]: I0404 03:10:18.554501 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jd9kg" podStartSLOduration=3.029802818 podStartE2EDuration="6.55448423s" podCreationTimestamp="2026-04-04 03:10:12 +0000 UTC" firstStartedPulling="2026-04-04 03:10:14.454220142 +0000 UTC m=+4494.119995272" lastFinishedPulling="2026-04-04 03:10:17.978901564 +0000 UTC m=+4497.644676684" observedRunningTime="2026-04-04 03:10:18.546704977 +0000 UTC m=+4498.212480097" watchObservedRunningTime="2026-04-04 03:10:18.55448423 +0000 UTC m=+4498.220259350"
Apr 04 03:10:23 crc kubenswrapper[4681]: I0404 03:10:23.353785 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:23 crc kubenswrapper[4681]: I0404 03:10:23.354393 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:23 crc kubenswrapper[4681]: I0404 03:10:23.399843 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:23 crc kubenswrapper[4681]: I0404 03:10:23.619172 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jd9kg"
Apr 04 03:10:23 crc kubenswrapper[4681]: I0404 03:10:23.668611 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd9kg"]
Apr 04 03:10:25 crc kubenswrapper[4681]: I0404 03:10:25.591222 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jd9kg" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="registry-server" containerID="cri-o://a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2" gracePeriod=2
Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.084992 4681 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-jd9kg" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.172033 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content\") pod \"43392b9a-ea52-4347-bb88-c702e694a0c0\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.172118 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities\") pod \"43392b9a-ea52-4347-bb88-c702e694a0c0\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.172259 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr58p\" (UniqueName: \"kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p\") pod \"43392b9a-ea52-4347-bb88-c702e694a0c0\" (UID: \"43392b9a-ea52-4347-bb88-c702e694a0c0\") " Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.173080 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities" (OuterVolumeSpecName: "utilities") pod "43392b9a-ea52-4347-bb88-c702e694a0c0" (UID: "43392b9a-ea52-4347-bb88-c702e694a0c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.182889 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p" (OuterVolumeSpecName: "kube-api-access-tr58p") pod "43392b9a-ea52-4347-bb88-c702e694a0c0" (UID: "43392b9a-ea52-4347-bb88-c702e694a0c0"). InnerVolumeSpecName "kube-api-access-tr58p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.232495 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43392b9a-ea52-4347-bb88-c702e694a0c0" (UID: "43392b9a-ea52-4347-bb88-c702e694a0c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.274728 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.274780 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43392b9a-ea52-4347-bb88-c702e694a0c0-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.274795 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr58p\" (UniqueName: \"kubernetes.io/projected/43392b9a-ea52-4347-bb88-c702e694a0c0-kube-api-access-tr58p\") on node \"crc\" DevicePath \"\"" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.524160 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.525153 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.609642 4681 generic.go:334] "Generic (PLEG): container finished" podID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerID="a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2" exitCode=0 Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.609709 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jd9kg" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.609710 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerDied","Data":"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2"} Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.610524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jd9kg" event={"ID":"43392b9a-ea52-4347-bb88-c702e694a0c0","Type":"ContainerDied","Data":"fd88a755633973d1d13e1f53d8adde446f978ecaa20d14e253399be7e9a9506d"} Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.610565 4681 scope.go:117] "RemoveContainer" containerID="a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.644612 4681 scope.go:117] "RemoveContainer" containerID="81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.659961 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jd9kg"] Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.678247 4681 scope.go:117] "RemoveContainer" containerID="b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.681700 4681 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/community-operators-jd9kg"] Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.724587 4681 scope.go:117] "RemoveContainer" containerID="a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2" Apr 04 03:10:26 crc kubenswrapper[4681]: E0404 03:10:26.727656 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2\": container with ID starting with a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2 not found: ID does not exist" containerID="a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.727786 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2"} err="failed to get container status \"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2\": rpc error: code = NotFound desc = could not find container \"a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2\": container with ID starting with a843391f84386b727ebb78e46271ae60728018411b6bc81a9f8138a6220afaa2 not found: ID does not exist" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.727900 4681 scope.go:117] "RemoveContainer" containerID="81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98" Apr 04 03:10:26 crc kubenswrapper[4681]: E0404 03:10:26.728454 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98\": container with ID starting with 81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98 not found: ID does not exist" containerID="81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 
03:10:26.728514 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98"} err="failed to get container status \"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98\": rpc error: code = NotFound desc = could not find container \"81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98\": container with ID starting with 81b37cda75d34c518cee579d38480275856548976cc47a9ae4fa3372c8192b98 not found: ID does not exist" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.728553 4681 scope.go:117] "RemoveContainer" containerID="b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817" Apr 04 03:10:26 crc kubenswrapper[4681]: E0404 03:10:26.728863 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817\": container with ID starting with b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817 not found: ID does not exist" containerID="b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817" Apr 04 03:10:26 crc kubenswrapper[4681]: I0404 03:10:26.728894 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817"} err="failed to get container status \"b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817\": rpc error: code = NotFound desc = could not find container \"b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817\": container with ID starting with b1772ba6f39977298925439ff9f23e4f0d839dd24db81f88c9f1a81100070817 not found: ID does not exist" Apr 04 03:10:27 crc kubenswrapper[4681]: I0404 03:10:27.218361 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" 
path="/var/lib/kubelet/pods/43392b9a-ea52-4347-bb88-c702e694a0c0/volumes" Apr 04 03:10:32 crc kubenswrapper[4681]: I0404 03:10:32.166373 4681 scope.go:117] "RemoveContainer" containerID="74494d1307a5e5b8717dc3d20c445d60b75d86adaeea5e8eeeb22f6d1858034f" Apr 04 03:10:56 crc kubenswrapper[4681]: I0404 03:10:56.524457 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:10:56 crc kubenswrapper[4681]: I0404 03:10:56.525073 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:11:26 crc kubenswrapper[4681]: I0404 03:11:26.525271 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:11:26 crc kubenswrapper[4681]: I0404 03:11:26.525893 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:11:26 crc kubenswrapper[4681]: I0404 03:11:26.525947 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:11:26 crc kubenswrapper[4681]: 
I0404 03:11:26.526902 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:11:26 crc kubenswrapper[4681]: I0404 03:11:26.526986 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" gracePeriod=600 Apr 04 03:11:26 crc kubenswrapper[4681]: E0404 03:11:26.652743 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:11:27 crc kubenswrapper[4681]: I0404 03:11:27.286802 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" exitCode=0 Apr 04 03:11:27 crc kubenswrapper[4681]: I0404 03:11:27.286845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"} Apr 04 03:11:27 crc kubenswrapper[4681]: I0404 03:11:27.287090 4681 scope.go:117] "RemoveContainer" 
containerID="8ccfb1040792a13cb64edd32b371118fd1ec9007ac80c528dda7fcb030de9137" Apr 04 03:11:27 crc kubenswrapper[4681]: I0404 03:11:27.287711 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:11:27 crc kubenswrapper[4681]: E0404 03:11:27.287963 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:11:32 crc kubenswrapper[4681]: I0404 03:11:32.262360 4681 scope.go:117] "RemoveContainer" containerID="54307e4060e74a0c4eeab286d99f0db44f65646f7dc934d5f258a39dc9a6ffce" Apr 04 03:11:32 crc kubenswrapper[4681]: I0404 03:11:32.285804 4681 scope.go:117] "RemoveContainer" containerID="b659e1917e896c118d2b0246dae8417f796f42f48fbe87d01359c43496c6351a" Apr 04 03:11:32 crc kubenswrapper[4681]: I0404 03:11:32.356945 4681 scope.go:117] "RemoveContainer" containerID="5cdec67b564488b051caa31386396d5317f72de9a1076cd8ee96e43c57e51ab2" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.143771 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:38 crc kubenswrapper[4681]: E0404 03:11:38.145457 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="extract-content" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.145480 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="extract-content" Apr 04 03:11:38 crc kubenswrapper[4681]: E0404 03:11:38.145517 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="extract-utilities" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.145527 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="extract-utilities" Apr 04 03:11:38 crc kubenswrapper[4681]: E0404 03:11:38.145551 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="registry-server" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.145561 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="registry-server" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.145841 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="43392b9a-ea52-4347-bb88-c702e694a0c0" containerName="registry-server" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.147479 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.162551 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.238843 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.238999 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnb44\" (UniqueName: \"kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44\") pod \"redhat-marketplace-cv2fk\" (UID: 
\"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.239060 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.340820 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.340980 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnb44\" (UniqueName: \"kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.341032 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.341374 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content\") pod \"redhat-marketplace-cv2fk\" (UID: 
\"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.341559 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.369076 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnb44\" (UniqueName: \"kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44\") pod \"redhat-marketplace-cv2fk\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.470691 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:38 crc kubenswrapper[4681]: I0404 03:11:38.989860 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:39 crc kubenswrapper[4681]: I0404 03:11:39.408157 4681 generic.go:334] "Generic (PLEG): container finished" podID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerID="c02093c214c9dd0cd7790f7a6ab86f909e778b4bed0f61c04b70ab82d015b690" exitCode=0 Apr 04 03:11:39 crc kubenswrapper[4681]: I0404 03:11:39.408213 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerDied","Data":"c02093c214c9dd0cd7790f7a6ab86f909e778b4bed0f61c04b70ab82d015b690"} Apr 04 03:11:39 crc kubenswrapper[4681]: I0404 03:11:39.408246 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" 
event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerStarted","Data":"4a19447053f31162758302c3c3c9dd7e52409deb1a990cdaac39c227da6b84c6"} Apr 04 03:11:40 crc kubenswrapper[4681]: I0404 03:11:40.420148 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerStarted","Data":"78dcd769cf5a96667d3012aea1be5dce654b723bbd14f6b6f8c84aaf4eac14b9"} Apr 04 03:11:41 crc kubenswrapper[4681]: I0404 03:11:41.211027 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:11:41 crc kubenswrapper[4681]: E0404 03:11:41.211684 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:11:41 crc kubenswrapper[4681]: I0404 03:11:41.430853 4681 generic.go:334] "Generic (PLEG): container finished" podID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerID="78dcd769cf5a96667d3012aea1be5dce654b723bbd14f6b6f8c84aaf4eac14b9" exitCode=0 Apr 04 03:11:41 crc kubenswrapper[4681]: I0404 03:11:41.430905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerDied","Data":"78dcd769cf5a96667d3012aea1be5dce654b723bbd14f6b6f8c84aaf4eac14b9"} Apr 04 03:11:42 crc kubenswrapper[4681]: I0404 03:11:42.447187 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" 
event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerStarted","Data":"5a6c5ac33c9d7a8a30df697107d21b8e63a40c3b5eed9f482ecbc111ead50b91"} Apr 04 03:11:42 crc kubenswrapper[4681]: I0404 03:11:42.471691 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cv2fk" podStartSLOduration=2.053560003 podStartE2EDuration="4.471671474s" podCreationTimestamp="2026-04-04 03:11:38 +0000 UTC" firstStartedPulling="2026-04-04 03:11:39.410189617 +0000 UTC m=+4579.075964737" lastFinishedPulling="2026-04-04 03:11:41.828301088 +0000 UTC m=+4581.494076208" observedRunningTime="2026-04-04 03:11:42.469456353 +0000 UTC m=+4582.135231473" watchObservedRunningTime="2026-04-04 03:11:42.471671474 +0000 UTC m=+4582.137446594" Apr 04 03:11:48 crc kubenswrapper[4681]: I0404 03:11:48.471276 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:48 crc kubenswrapper[4681]: I0404 03:11:48.471725 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:48 crc kubenswrapper[4681]: I0404 03:11:48.517020 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:48 crc kubenswrapper[4681]: I0404 03:11:48.582974 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:48 crc kubenswrapper[4681]: I0404 03:11:48.757192 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:50 crc kubenswrapper[4681]: I0404 03:11:50.521385 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cv2fk" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="registry-server" 
containerID="cri-o://5a6c5ac33c9d7a8a30df697107d21b8e63a40c3b5eed9f482ecbc111ead50b91" gracePeriod=2 Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.548330 4681 generic.go:334] "Generic (PLEG): container finished" podID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerID="5a6c5ac33c9d7a8a30df697107d21b8e63a40c3b5eed9f482ecbc111ead50b91" exitCode=0 Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.548379 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerDied","Data":"5a6c5ac33c9d7a8a30df697107d21b8e63a40c3b5eed9f482ecbc111ead50b91"} Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.548409 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cv2fk" event={"ID":"8b88dbce-6fbe-477f-b7e8-74da9f9c2650","Type":"ContainerDied","Data":"4a19447053f31162758302c3c3c9dd7e52409deb1a990cdaac39c227da6b84c6"} Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.548426 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a19447053f31162758302c3c3c9dd7e52409deb1a990cdaac39c227da6b84c6" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.632765 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.742684 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content\") pod \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.743069 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnb44\" (UniqueName: \"kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44\") pod \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.743219 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities\") pod \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\" (UID: \"8b88dbce-6fbe-477f-b7e8-74da9f9c2650\") " Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.744440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities" (OuterVolumeSpecName: "utilities") pod "8b88dbce-6fbe-477f-b7e8-74da9f9c2650" (UID: "8b88dbce-6fbe-477f-b7e8-74da9f9c2650"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.750440 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44" (OuterVolumeSpecName: "kube-api-access-mnb44") pod "8b88dbce-6fbe-477f-b7e8-74da9f9c2650" (UID: "8b88dbce-6fbe-477f-b7e8-74da9f9c2650"). InnerVolumeSpecName "kube-api-access-mnb44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.781125 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b88dbce-6fbe-477f-b7e8-74da9f9c2650" (UID: "8b88dbce-6fbe-477f-b7e8-74da9f9c2650"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.845835 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.845885 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:11:51 crc kubenswrapper[4681]: I0404 03:11:51.845906 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnb44\" (UniqueName: \"kubernetes.io/projected/8b88dbce-6fbe-477f-b7e8-74da9f9c2650-kube-api-access-mnb44\") on node \"crc\" DevicePath \"\"" Apr 04 03:11:52 crc kubenswrapper[4681]: I0404 03:11:52.201225 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:11:52 crc kubenswrapper[4681]: E0404 03:11:52.201686 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:11:52 
crc kubenswrapper[4681]: I0404 03:11:52.562529 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cv2fk" Apr 04 03:11:52 crc kubenswrapper[4681]: I0404 03:11:52.619427 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:52 crc kubenswrapper[4681]: I0404 03:11:52.631283 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cv2fk"] Apr 04 03:11:53 crc kubenswrapper[4681]: I0404 03:11:53.220355 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" path="/var/lib/kubelet/pods/8b88dbce-6fbe-477f-b7e8-74da9f9c2650/volumes" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.146394 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587872-szqf6"] Apr 04 03:12:00 crc kubenswrapper[4681]: E0404 03:12:00.148035 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="registry-server" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.148072 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="registry-server" Apr 04 03:12:00 crc kubenswrapper[4681]: E0404 03:12:00.148101 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="extract-utilities" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.148118 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="extract-utilities" Apr 04 03:12:00 crc kubenswrapper[4681]: E0404 03:12:00.148151 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="extract-content" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 
03:12:00.148173 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="extract-content" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.148715 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b88dbce-6fbe-477f-b7e8-74da9f9c2650" containerName="registry-server" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.150242 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.152591 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.153541 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.153938 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.162819 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587872-szqf6"] Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.353445 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4g64\" (UniqueName: \"kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64\") pod \"auto-csr-approver-29587872-szqf6\" (UID: \"88c2ed7d-8b5c-4223-9944-63035448641d\") " pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.455334 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4g64\" (UniqueName: \"kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64\") pod \"auto-csr-approver-29587872-szqf6\" (UID: 
\"88c2ed7d-8b5c-4223-9944-63035448641d\") " pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.482068 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4g64\" (UniqueName: \"kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64\") pod \"auto-csr-approver-29587872-szqf6\" (UID: \"88c2ed7d-8b5c-4223-9944-63035448641d\") " pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:00 crc kubenswrapper[4681]: I0404 03:12:00.766758 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:01 crc kubenswrapper[4681]: I0404 03:12:01.239654 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587872-szqf6"] Apr 04 03:12:01 crc kubenswrapper[4681]: W0404 03:12:01.246585 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88c2ed7d_8b5c_4223_9944_63035448641d.slice/crio-da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890 WatchSource:0}: Error finding container da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890: Status 404 returned error can't find the container with id da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890 Apr 04 03:12:01 crc kubenswrapper[4681]: I0404 03:12:01.645838 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587872-szqf6" event={"ID":"88c2ed7d-8b5c-4223-9944-63035448641d","Type":"ContainerStarted","Data":"da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890"} Apr 04 03:12:02 crc kubenswrapper[4681]: I0404 03:12:02.667934 4681 generic.go:334] "Generic (PLEG): container finished" podID="88c2ed7d-8b5c-4223-9944-63035448641d" containerID="c69f65257de90b7a6fff0d476cbc58d665da26297e079d0a4109c07e39c7277e" exitCode=0 
Apr 04 03:12:02 crc kubenswrapper[4681]: I0404 03:12:02.667999 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587872-szqf6" event={"ID":"88c2ed7d-8b5c-4223-9944-63035448641d","Type":"ContainerDied","Data":"c69f65257de90b7a6fff0d476cbc58d665da26297e079d0a4109c07e39c7277e"} Apr 04 03:12:03 crc kubenswrapper[4681]: I0404 03:12:03.201662 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:12:03 crc kubenswrapper[4681]: E0404 03:12:03.202491 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.062184 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.231750 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4g64\" (UniqueName: \"kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64\") pod \"88c2ed7d-8b5c-4223-9944-63035448641d\" (UID: \"88c2ed7d-8b5c-4223-9944-63035448641d\") " Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.244748 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64" (OuterVolumeSpecName: "kube-api-access-k4g64") pod "88c2ed7d-8b5c-4223-9944-63035448641d" (UID: "88c2ed7d-8b5c-4223-9944-63035448641d"). InnerVolumeSpecName "kube-api-access-k4g64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.334861 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4g64\" (UniqueName: \"kubernetes.io/projected/88c2ed7d-8b5c-4223-9944-63035448641d-kube-api-access-k4g64\") on node \"crc\" DevicePath \"\"" Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.697956 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587872-szqf6" event={"ID":"88c2ed7d-8b5c-4223-9944-63035448641d","Type":"ContainerDied","Data":"da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890"} Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.697997 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da16a8d0263be34282909e154fd8288e6cc9c8065caa3b18d4221f94d46dc890" Apr 04 03:12:04 crc kubenswrapper[4681]: I0404 03:12:04.698048 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587872-szqf6" Apr 04 03:12:05 crc kubenswrapper[4681]: I0404 03:12:05.148568 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587866-xzk8f"] Apr 04 03:12:05 crc kubenswrapper[4681]: I0404 03:12:05.164016 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587866-xzk8f"] Apr 04 03:12:05 crc kubenswrapper[4681]: I0404 03:12:05.212131 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a" path="/var/lib/kubelet/pods/66dd4c7b-3b2d-4332-bdc7-d0a542c4f96a/volumes" Apr 04 03:12:14 crc kubenswrapper[4681]: I0404 03:12:14.201094 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:12:14 crc kubenswrapper[4681]: E0404 03:12:14.201865 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:12:28 crc kubenswrapper[4681]: I0404 03:12:28.201214 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:12:28 crc kubenswrapper[4681]: E0404 03:12:28.202017 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:12:32 crc kubenswrapper[4681]: I0404 03:12:32.451590 4681 scope.go:117] "RemoveContainer" containerID="968c9ec1393fc59719838aa316c9ec172a9cc596e987ee2663f02f05c0b57bbd" Apr 04 03:12:43 crc kubenswrapper[4681]: I0404 03:12:43.201920 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:12:43 crc kubenswrapper[4681]: E0404 03:12:43.203143 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:12:56 crc kubenswrapper[4681]: I0404 03:12:56.204525 4681 scope.go:117] "RemoveContainer" 
containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:12:56 crc kubenswrapper[4681]: E0404 03:12:56.205796 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:13:07 crc kubenswrapper[4681]: I0404 03:13:07.201037 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:13:07 crc kubenswrapper[4681]: E0404 03:13:07.201809 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:13:19 crc kubenswrapper[4681]: I0404 03:13:19.200837 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:13:19 crc kubenswrapper[4681]: E0404 03:13:19.201598 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:13:32 crc kubenswrapper[4681]: I0404 03:13:32.201375 4681 scope.go:117] 
"RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:13:32 crc kubenswrapper[4681]: E0404 03:13:32.203424 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:13:47 crc kubenswrapper[4681]: I0404 03:13:47.201441 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:13:47 crc kubenswrapper[4681]: E0404 03:13:47.202128 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.163062 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587874-wn64z"] Apr 04 03:14:00 crc kubenswrapper[4681]: E0404 03:14:00.165663 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88c2ed7d-8b5c-4223-9944-63035448641d" containerName="oc" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.165793 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="88c2ed7d-8b5c-4223-9944-63035448641d" containerName="oc" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.166164 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="88c2ed7d-8b5c-4223-9944-63035448641d" containerName="oc" Apr 
04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.167231 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.170479 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.170783 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.170926 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.180699 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587874-wn64z"] Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.200972 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:14:00 crc kubenswrapper[4681]: E0404 03:14:00.201203 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.265434 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd5gq\" (UniqueName: \"kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq\") pod \"auto-csr-approver-29587874-wn64z\" (UID: \"c7d49235-c130-4b50-84f1-5ed2a85cb778\") " 
pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.368819 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd5gq\" (UniqueName: \"kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq\") pod \"auto-csr-approver-29587874-wn64z\" (UID: \"c7d49235-c130-4b50-84f1-5ed2a85cb778\") " pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:00 crc kubenswrapper[4681]: I0404 03:14:00.795438 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bd5gq\" (UniqueName: \"kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq\") pod \"auto-csr-approver-29587874-wn64z\" (UID: \"c7d49235-c130-4b50-84f1-5ed2a85cb778\") " pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:01 crc kubenswrapper[4681]: I0404 03:14:01.088406 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:01 crc kubenswrapper[4681]: I0404 03:14:01.565329 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587874-wn64z"] Apr 04 03:14:01 crc kubenswrapper[4681]: I0404 03:14:01.885923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587874-wn64z" event={"ID":"c7d49235-c130-4b50-84f1-5ed2a85cb778","Type":"ContainerStarted","Data":"cb0db37acf803932d98c07efcc6225a8aacb1247a5f8c132a925a1ac9bc721ba"} Apr 04 03:14:03 crc kubenswrapper[4681]: I0404 03:14:03.911964 4681 generic.go:334] "Generic (PLEG): container finished" podID="c7d49235-c130-4b50-84f1-5ed2a85cb778" containerID="9a2846c06d31064b6a46a267596d91425583cb92e44b703452beae784df7cf89" exitCode=0 Apr 04 03:14:03 crc kubenswrapper[4681]: I0404 03:14:03.912067 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29587874-wn64z" event={"ID":"c7d49235-c130-4b50-84f1-5ed2a85cb778","Type":"ContainerDied","Data":"9a2846c06d31064b6a46a267596d91425583cb92e44b703452beae784df7cf89"} Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.272066 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.283788 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd5gq\" (UniqueName: \"kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq\") pod \"c7d49235-c130-4b50-84f1-5ed2a85cb778\" (UID: \"c7d49235-c130-4b50-84f1-5ed2a85cb778\") " Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.290836 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq" (OuterVolumeSpecName: "kube-api-access-bd5gq") pod "c7d49235-c130-4b50-84f1-5ed2a85cb778" (UID: "c7d49235-c130-4b50-84f1-5ed2a85cb778"). InnerVolumeSpecName "kube-api-access-bd5gq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.385899 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bd5gq\" (UniqueName: \"kubernetes.io/projected/c7d49235-c130-4b50-84f1-5ed2a85cb778-kube-api-access-bd5gq\") on node \"crc\" DevicePath \"\"" Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.935423 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587874-wn64z" event={"ID":"c7d49235-c130-4b50-84f1-5ed2a85cb778","Type":"ContainerDied","Data":"cb0db37acf803932d98c07efcc6225a8aacb1247a5f8c132a925a1ac9bc721ba"} Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.935500 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb0db37acf803932d98c07efcc6225a8aacb1247a5f8c132a925a1ac9bc721ba" Apr 04 03:14:05 crc kubenswrapper[4681]: I0404 03:14:05.935496 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587874-wn64z" Apr 04 03:14:06 crc kubenswrapper[4681]: I0404 03:14:06.373727 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587868-wtpjm"] Apr 04 03:14:06 crc kubenswrapper[4681]: I0404 03:14:06.385381 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587868-wtpjm"] Apr 04 03:14:07 crc kubenswrapper[4681]: I0404 03:14:07.234904 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34424aa-2b02-4f44-8435-809f2227636f" path="/var/lib/kubelet/pods/d34424aa-2b02-4f44-8435-809f2227636f/volumes" Apr 04 03:14:14 crc kubenswrapper[4681]: I0404 03:14:14.200824 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:14:14 crc kubenswrapper[4681]: E0404 03:14:14.201711 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:14:25 crc kubenswrapper[4681]: I0404 03:14:25.201178 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:14:25 crc kubenswrapper[4681]: E0404 03:14:25.202012 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:14:32 crc kubenswrapper[4681]: I0404 03:14:32.572092 4681 scope.go:117] "RemoveContainer" containerID="5b80ef6271bb54226cc87ed89ee2df5ff45497a6b91483848fc3bae066d1c985" Apr 04 03:14:37 crc kubenswrapper[4681]: I0404 03:14:37.200571 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:14:37 crc kubenswrapper[4681]: E0404 03:14:37.202123 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:14:49 crc kubenswrapper[4681]: I0404 03:14:49.201947 4681 scope.go:117] "RemoveContainer" 
containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:14:49 crc kubenswrapper[4681]: E0404 03:14:49.203375 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.169694 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5"] Apr 04 03:15:00 crc kubenswrapper[4681]: E0404 03:15:00.171637 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7d49235-c130-4b50-84f1-5ed2a85cb778" containerName="oc" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.171718 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7d49235-c130-4b50-84f1-5ed2a85cb778" containerName="oc" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.171955 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7d49235-c130-4b50-84f1-5ed2a85cb778" containerName="oc" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.173115 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.176597 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.176977 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.187649 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5"] Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.200734 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:15:00 crc kubenswrapper[4681]: E0404 03:15:00.201147 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.211772 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.211908 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.211931 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmlsj\" (UniqueName: \"kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.313923 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.314291 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.314318 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmlsj\" (UniqueName: \"kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 
crc kubenswrapper[4681]: I0404 03:15:00.314778 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.321016 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.329362 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmlsj\" (UniqueName: \"kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj\") pod \"collect-profiles-29587875-dcgj5\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.502910 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:00 crc kubenswrapper[4681]: I0404 03:15:00.941751 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5"] Apr 04 03:15:01 crc kubenswrapper[4681]: I0404 03:15:01.483078 4681 generic.go:334] "Generic (PLEG): container finished" podID="d6fab752-2d68-40be-9ce6-c8146c76a53f" containerID="730d5a884c75bee50776cf5a79c05491a6d9926b6cd8cb3425d02a126bb6504c" exitCode=0 Apr 04 03:15:01 crc kubenswrapper[4681]: I0404 03:15:01.483130 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" event={"ID":"d6fab752-2d68-40be-9ce6-c8146c76a53f","Type":"ContainerDied","Data":"730d5a884c75bee50776cf5a79c05491a6d9926b6cd8cb3425d02a126bb6504c"} Apr 04 03:15:01 crc kubenswrapper[4681]: I0404 03:15:01.483160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" event={"ID":"d6fab752-2d68-40be-9ce6-c8146c76a53f","Type":"ContainerStarted","Data":"8d67061e21b1922700b51ceb6d8049960ebbb39722ae3196b9de3a7572946d3a"} Apr 04 03:15:02 crc kubenswrapper[4681]: I0404 03:15:02.924857 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.073067 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume\") pod \"d6fab752-2d68-40be-9ce6-c8146c76a53f\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.073174 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmlsj\" (UniqueName: \"kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj\") pod \"d6fab752-2d68-40be-9ce6-c8146c76a53f\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.073283 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume\") pod \"d6fab752-2d68-40be-9ce6-c8146c76a53f\" (UID: \"d6fab752-2d68-40be-9ce6-c8146c76a53f\") " Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.074405 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6fab752-2d68-40be-9ce6-c8146c76a53f" (UID: "d6fab752-2d68-40be-9ce6-c8146c76a53f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.088639 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6fab752-2d68-40be-9ce6-c8146c76a53f" (UID: "d6fab752-2d68-40be-9ce6-c8146c76a53f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.088717 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj" (OuterVolumeSpecName: "kube-api-access-hmlsj") pod "d6fab752-2d68-40be-9ce6-c8146c76a53f" (UID: "d6fab752-2d68-40be-9ce6-c8146c76a53f"). InnerVolumeSpecName "kube-api-access-hmlsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.175824 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6fab752-2d68-40be-9ce6-c8146c76a53f-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.175880 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmlsj\" (UniqueName: \"kubernetes.io/projected/d6fab752-2d68-40be-9ce6-c8146c76a53f-kube-api-access-hmlsj\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.175890 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6fab752-2d68-40be-9ce6-c8146c76a53f-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.506562 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" event={"ID":"d6fab752-2d68-40be-9ce6-c8146c76a53f","Type":"ContainerDied","Data":"8d67061e21b1922700b51ceb6d8049960ebbb39722ae3196b9de3a7572946d3a"} Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.507002 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d67061e21b1922700b51ceb6d8049960ebbb39722ae3196b9de3a7572946d3a" Apr 04 03:15:03 crc kubenswrapper[4681]: I0404 03:15:03.506652 4681 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5" Apr 04 03:15:04 crc kubenswrapper[4681]: I0404 03:15:04.002238 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5"] Apr 04 03:15:04 crc kubenswrapper[4681]: I0404 03:15:04.012641 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587830-cb9k5"] Apr 04 03:15:05 crc kubenswrapper[4681]: I0404 03:15:05.216857 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cba58d4-a8c6-4a88-8c02-d6b12c7b935c" path="/var/lib/kubelet/pods/9cba58d4-a8c6-4a88-8c02-d6b12c7b935c/volumes" Apr 04 03:15:11 crc kubenswrapper[4681]: I0404 03:15:11.211612 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:15:11 crc kubenswrapper[4681]: E0404 03:15:11.212368 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:15:24 crc kubenswrapper[4681]: I0404 03:15:24.201729 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:15:24 crc kubenswrapper[4681]: E0404 03:15:24.206899 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.242471 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:28 crc kubenswrapper[4681]: E0404 03:15:28.243397 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6fab752-2d68-40be-9ce6-c8146c76a53f" containerName="collect-profiles" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.243409 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6fab752-2d68-40be-9ce6-c8146c76a53f" containerName="collect-profiles" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.243619 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6fab752-2d68-40be-9ce6-c8146c76a53f" containerName="collect-profiles" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.245241 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.255040 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.358001 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn6j9\" (UniqueName: \"kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.358370 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities\") pod \"certified-operators-vpc9k\" (UID: 
\"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.358535 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.464193 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.464395 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn6j9\" (UniqueName: \"kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.464447 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.464789 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content\") pod \"certified-operators-vpc9k\" (UID: 
\"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.464897 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.496417 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn6j9\" (UniqueName: \"kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9\") pod \"certified-operators-vpc9k\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:28 crc kubenswrapper[4681]: I0404 03:15:28.570964 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:29 crc kubenswrapper[4681]: I0404 03:15:29.159163 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:29 crc kubenswrapper[4681]: I0404 03:15:29.829977 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerID="8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f" exitCode=0 Apr 04 03:15:29 crc kubenswrapper[4681]: I0404 03:15:29.830044 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerDied","Data":"8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f"} Apr 04 03:15:29 crc kubenswrapper[4681]: I0404 03:15:29.830795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" 
event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerStarted","Data":"4ec9ff6875421eb3e54b0cff531335468690d6d812603a07d830d5ab775e460b"} Apr 04 03:15:29 crc kubenswrapper[4681]: I0404 03:15:29.832680 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:15:30 crc kubenswrapper[4681]: I0404 03:15:30.843001 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerStarted","Data":"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51"} Apr 04 03:15:32 crc kubenswrapper[4681]: I0404 03:15:32.663616 4681 scope.go:117] "RemoveContainer" containerID="17bde040bacf8d803af894efea6d1b5d9a45ddfae0ea83ffc10114c826450491" Apr 04 03:15:33 crc kubenswrapper[4681]: I0404 03:15:33.875994 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerID="3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51" exitCode=0 Apr 04 03:15:33 crc kubenswrapper[4681]: I0404 03:15:33.876116 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerDied","Data":"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51"} Apr 04 03:15:34 crc kubenswrapper[4681]: I0404 03:15:34.890144 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerStarted","Data":"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57"} Apr 04 03:15:34 crc kubenswrapper[4681]: I0404 03:15:34.919052 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vpc9k" podStartSLOduration=2.483015611 podStartE2EDuration="6.919036236s" 
podCreationTimestamp="2026-04-04 03:15:28 +0000 UTC" firstStartedPulling="2026-04-04 03:15:29.832165421 +0000 UTC m=+4809.497940571" lastFinishedPulling="2026-04-04 03:15:34.268186076 +0000 UTC m=+4813.933961196" observedRunningTime="2026-04-04 03:15:34.905743093 +0000 UTC m=+4814.571518213" watchObservedRunningTime="2026-04-04 03:15:34.919036236 +0000 UTC m=+4814.584811356" Apr 04 03:15:38 crc kubenswrapper[4681]: I0404 03:15:38.201503 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:15:38 crc kubenswrapper[4681]: E0404 03:15:38.202307 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:15:38 crc kubenswrapper[4681]: I0404 03:15:38.571818 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:38 crc kubenswrapper[4681]: I0404 03:15:38.571882 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:38 crc kubenswrapper[4681]: I0404 03:15:38.641331 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:48 crc kubenswrapper[4681]: I0404 03:15:48.640972 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:48 crc kubenswrapper[4681]: I0404 03:15:48.700103 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:49 
crc kubenswrapper[4681]: I0404 03:15:49.043680 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vpc9k" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="registry-server" containerID="cri-o://955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57" gracePeriod=2 Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.545118 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.629676 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities\") pod \"0f34d6f5-55c6-463a-acbd-566ad39051b8\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.629937 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn6j9\" (UniqueName: \"kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9\") pod \"0f34d6f5-55c6-463a-acbd-566ad39051b8\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.629974 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content\") pod \"0f34d6f5-55c6-463a-acbd-566ad39051b8\" (UID: \"0f34d6f5-55c6-463a-acbd-566ad39051b8\") " Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.634851 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities" (OuterVolumeSpecName: "utilities") pod "0f34d6f5-55c6-463a-acbd-566ad39051b8" (UID: "0f34d6f5-55c6-463a-acbd-566ad39051b8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.643600 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9" (OuterVolumeSpecName: "kube-api-access-rn6j9") pod "0f34d6f5-55c6-463a-acbd-566ad39051b8" (UID: "0f34d6f5-55c6-463a-acbd-566ad39051b8"). InnerVolumeSpecName "kube-api-access-rn6j9". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.711484 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f34d6f5-55c6-463a-acbd-566ad39051b8" (UID: "0f34d6f5-55c6-463a-acbd-566ad39051b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.734220 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.734257 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn6j9\" (UniqueName: \"kubernetes.io/projected/0f34d6f5-55c6-463a-acbd-566ad39051b8-kube-api-access-rn6j9\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:49 crc kubenswrapper[4681]: I0404 03:15:49.734300 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f34d6f5-55c6-463a-acbd-566ad39051b8-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.058207 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vpc9k" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.058216 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerDied","Data":"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57"} Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.058327 4681 scope.go:117] "RemoveContainer" containerID="955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.058064 4681 generic.go:334] "Generic (PLEG): container finished" podID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerID="955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57" exitCode=0 Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.058456 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vpc9k" event={"ID":"0f34d6f5-55c6-463a-acbd-566ad39051b8","Type":"ContainerDied","Data":"4ec9ff6875421eb3e54b0cff531335468690d6d812603a07d830d5ab775e460b"} Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.090007 4681 scope.go:117] "RemoveContainer" containerID="3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.099160 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.113733 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vpc9k"] Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.125645 4681 scope.go:117] "RemoveContainer" containerID="8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.174502 4681 scope.go:117] "RemoveContainer" 
containerID="955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57" Apr 04 03:15:50 crc kubenswrapper[4681]: E0404 03:15:50.174968 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57\": container with ID starting with 955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57 not found: ID does not exist" containerID="955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.175003 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57"} err="failed to get container status \"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57\": rpc error: code = NotFound desc = could not find container \"955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57\": container with ID starting with 955f03b3138e2a100a4b3932e4f748bd8cd60e8ad4ec4fe0781605bf67d48f57 not found: ID does not exist" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.175035 4681 scope.go:117] "RemoveContainer" containerID="3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51" Apr 04 03:15:50 crc kubenswrapper[4681]: E0404 03:15:50.175490 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51\": container with ID starting with 3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51 not found: ID does not exist" containerID="3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.175525 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51"} err="failed to get container status \"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51\": rpc error: code = NotFound desc = could not find container \"3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51\": container with ID starting with 3d770119d2718800bb1402a8c286ac05771cf61fbf6749d7a5988f6e0020da51 not found: ID does not exist" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.175544 4681 scope.go:117] "RemoveContainer" containerID="8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f" Apr 04 03:15:50 crc kubenswrapper[4681]: E0404 03:15:50.175811 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f\": container with ID starting with 8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f not found: ID does not exist" containerID="8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f" Apr 04 03:15:50 crc kubenswrapper[4681]: I0404 03:15:50.175832 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f"} err="failed to get container status \"8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f\": rpc error: code = NotFound desc = could not find container \"8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f\": container with ID starting with 8c7c323cbee7d6d6a3a9105f4fb615b8f0efcc10fc58b19e4bec976c183bea4f not found: ID does not exist" Apr 04 03:15:51 crc kubenswrapper[4681]: I0404 03:15:51.220328 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" path="/var/lib/kubelet/pods/0f34d6f5-55c6-463a-acbd-566ad39051b8/volumes" Apr 04 03:15:52 crc kubenswrapper[4681]: I0404 
03:15:52.202140 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"
Apr 04 03:15:52 crc kubenswrapper[4681]: E0404 03:15:52.202647 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.150793 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587876-qbq7b"]
Apr 04 03:16:00 crc kubenswrapper[4681]: E0404 03:16:00.152510 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="extract-utilities"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.152536 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="extract-utilities"
Apr 04 03:16:00 crc kubenswrapper[4681]: E0404 03:16:00.152593 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="extract-content"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.152603 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="extract-content"
Apr 04 03:16:00 crc kubenswrapper[4681]: E0404 03:16:00.152631 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="registry-server"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.152642 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="registry-server"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.152934 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f34d6f5-55c6-463a-acbd-566ad39051b8" containerName="registry-server"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.154140 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.156600 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.156607 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.157001 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.181503 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587876-qbq7b"]
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.257760 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lk7t\" (UniqueName: \"kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t\") pod \"auto-csr-approver-29587876-qbq7b\" (UID: \"396a9979-459d-4574-854b-b1d49f26194e\") " pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.359412 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lk7t\" (UniqueName: \"kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t\") pod \"auto-csr-approver-29587876-qbq7b\" (UID: \"396a9979-459d-4574-854b-b1d49f26194e\") " pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.378711 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lk7t\" (UniqueName: \"kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t\") pod \"auto-csr-approver-29587876-qbq7b\" (UID: \"396a9979-459d-4574-854b-b1d49f26194e\") " pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.479440 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:00 crc kubenswrapper[4681]: I0404 03:16:00.915721 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587876-qbq7b"]
Apr 04 03:16:01 crc kubenswrapper[4681]: I0404 03:16:01.220410 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587876-qbq7b" event={"ID":"396a9979-459d-4574-854b-b1d49f26194e","Type":"ContainerStarted","Data":"4a0e24c39acbf207b9f8684b95762a2b8625f309f1cc0720c21eb815e19ccfda"}
Apr 04 03:16:02 crc kubenswrapper[4681]: I0404 03:16:02.212750 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587876-qbq7b" event={"ID":"396a9979-459d-4574-854b-b1d49f26194e","Type":"ContainerStarted","Data":"393b2692bcb0bd034d2c8189711144a3f00fbf79d97563b9be470f6761676d8a"}
Apr 04 03:16:02 crc kubenswrapper[4681]: I0404 03:16:02.230335 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587876-qbq7b" podStartSLOduration=1.347768318 podStartE2EDuration="2.230311444s" podCreationTimestamp="2026-04-04 03:16:00 +0000 UTC" firstStartedPulling="2026-04-04 03:16:00.921751467 +0000 UTC m=+4840.587526587" lastFinishedPulling="2026-04-04 03:16:01.804294593 +0000 UTC m=+4841.470069713" observedRunningTime="2026-04-04 03:16:02.227023334 +0000 UTC m=+4841.892798454" watchObservedRunningTime="2026-04-04 03:16:02.230311444 +0000 UTC m=+4841.896086564"
Apr 04 03:16:03 crc kubenswrapper[4681]: I0404 03:16:03.226961 4681 generic.go:334] "Generic (PLEG): container finished" podID="396a9979-459d-4574-854b-b1d49f26194e" containerID="393b2692bcb0bd034d2c8189711144a3f00fbf79d97563b9be470f6761676d8a" exitCode=0
Apr 04 03:16:03 crc kubenswrapper[4681]: I0404 03:16:03.227028 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587876-qbq7b" event={"ID":"396a9979-459d-4574-854b-b1d49f26194e","Type":"ContainerDied","Data":"393b2692bcb0bd034d2c8189711144a3f00fbf79d97563b9be470f6761676d8a"}
Apr 04 03:16:04 crc kubenswrapper[4681]: I0404 03:16:04.202143 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"
Apr 04 03:16:04 crc kubenswrapper[4681]: E0404 03:16:04.202824 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:16:04 crc kubenswrapper[4681]: I0404 03:16:04.583110 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:04 crc kubenswrapper[4681]: I0404 03:16:04.648375 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lk7t\" (UniqueName: \"kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t\") pod \"396a9979-459d-4574-854b-b1d49f26194e\" (UID: \"396a9979-459d-4574-854b-b1d49f26194e\") "
Apr 04 03:16:04 crc kubenswrapper[4681]: I0404 03:16:04.656346 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t" (OuterVolumeSpecName: "kube-api-access-4lk7t") pod "396a9979-459d-4574-854b-b1d49f26194e" (UID: "396a9979-459d-4574-854b-b1d49f26194e"). InnerVolumeSpecName "kube-api-access-4lk7t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:16:04 crc kubenswrapper[4681]: I0404 03:16:04.750981 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lk7t\" (UniqueName: \"kubernetes.io/projected/396a9979-459d-4574-854b-b1d49f26194e-kube-api-access-4lk7t\") on node \"crc\" DevicePath \"\""
Apr 04 03:16:05 crc kubenswrapper[4681]: I0404 03:16:05.253066 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587876-qbq7b" event={"ID":"396a9979-459d-4574-854b-b1d49f26194e","Type":"ContainerDied","Data":"4a0e24c39acbf207b9f8684b95762a2b8625f309f1cc0720c21eb815e19ccfda"}
Apr 04 03:16:05 crc kubenswrapper[4681]: I0404 03:16:05.254152 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0e24c39acbf207b9f8684b95762a2b8625f309f1cc0720c21eb815e19ccfda"
Apr 04 03:16:05 crc kubenswrapper[4681]: I0404 03:16:05.254129 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587876-qbq7b"
Apr 04 03:16:05 crc kubenswrapper[4681]: I0404 03:16:05.311230 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587870-jv4fw"]
Apr 04 03:16:05 crc kubenswrapper[4681]: I0404 03:16:05.321708 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587870-jv4fw"]
Apr 04 03:16:07 crc kubenswrapper[4681]: I0404 03:16:07.220638 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="782b5369-6e4b-462d-aa6d-28d68fdcf0f9" path="/var/lib/kubelet/pods/782b5369-6e4b-462d-aa6d-28d68fdcf0f9/volumes"
Apr 04 03:16:18 crc kubenswrapper[4681]: I0404 03:16:18.201617 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"
Apr 04 03:16:18 crc kubenswrapper[4681]: E0404 03:16:18.202381 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:16:29 crc kubenswrapper[4681]: I0404 03:16:29.201392 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c"
Apr 04 03:16:30 crc kubenswrapper[4681]: I0404 03:16:30.548734 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0"}
Apr 04 03:16:32 crc kubenswrapper[4681]: I0404 03:16:32.947969 4681 scope.go:117] "RemoveContainer" containerID="a3a1f8a6516931036eb3310d915057c52f8e9a46892a6c3533612c8b2a672a0f"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.760228 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:25 crc kubenswrapper[4681]: E0404 03:17:25.761293 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="396a9979-459d-4574-854b-b1d49f26194e" containerName="oc"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.761308 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="396a9979-459d-4574-854b-b1d49f26194e" containerName="oc"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.761562 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="396a9979-459d-4574-854b-b1d49f26194e" containerName="oc"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.763406 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.775348 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.933388 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.933443 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p4kd\" (UniqueName: \"kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:25 crc kubenswrapper[4681]: I0404 03:17:25.933578 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.035497 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.035621 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.035667 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p4kd\" (UniqueName: \"kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.036117 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.036117 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.062355 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p4kd\" (UniqueName: \"kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd\") pod \"redhat-operators-f6fhq\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") " pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.089150 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:26 crc kubenswrapper[4681]: I0404 03:17:26.561651 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:27 crc kubenswrapper[4681]: I0404 03:17:27.142972 4681 generic.go:334] "Generic (PLEG): container finished" podID="7eefa93e-c921-4823-ae08-626a462af47d" containerID="7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d" exitCode=0
Apr 04 03:17:27 crc kubenswrapper[4681]: I0404 03:17:27.143014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerDied","Data":"7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d"}
Apr 04 03:17:27 crc kubenswrapper[4681]: I0404 03:17:27.143261 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerStarted","Data":"77ee88c2d8148891f1cfdd2dccf74b4c5fc48ed28bfcddbe8d58f29eef516df4"}
Apr 04 03:17:28 crc kubenswrapper[4681]: I0404 03:17:28.181595 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerStarted","Data":"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"}
Apr 04 03:17:34 crc kubenswrapper[4681]: I0404 03:17:34.310858 4681 generic.go:334] "Generic (PLEG): container finished" podID="7eefa93e-c921-4823-ae08-626a462af47d" containerID="e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc" exitCode=0
Apr 04 03:17:34 crc kubenswrapper[4681]: I0404 03:17:34.311109 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerDied","Data":"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"}
Apr 04 03:17:35 crc kubenswrapper[4681]: I0404 03:17:35.326959 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerStarted","Data":"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"}
Apr 04 03:17:35 crc kubenswrapper[4681]: I0404 03:17:35.357695 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f6fhq" podStartSLOduration=2.76963518 podStartE2EDuration="10.357675482s" podCreationTimestamp="2026-04-04 03:17:25 +0000 UTC" firstStartedPulling="2026-04-04 03:17:27.144307866 +0000 UTC m=+4926.810082986" lastFinishedPulling="2026-04-04 03:17:34.732348158 +0000 UTC m=+4934.398123288" observedRunningTime="2026-04-04 03:17:35.350818124 +0000 UTC m=+4935.016593264" watchObservedRunningTime="2026-04-04 03:17:35.357675482 +0000 UTC m=+4935.023450612"
Apr 04 03:17:36 crc kubenswrapper[4681]: I0404 03:17:36.089567 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:36 crc kubenswrapper[4681]: I0404 03:17:36.089657 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:37 crc kubenswrapper[4681]: I0404 03:17:37.143417 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f6fhq" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="registry-server" probeResult="failure" output=<
Apr 04 03:17:37 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Apr 04 03:17:37 crc kubenswrapper[4681]: >
Apr 04 03:17:46 crc kubenswrapper[4681]: I0404 03:17:46.134394 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:46 crc kubenswrapper[4681]: I0404 03:17:46.183053 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:46 crc kubenswrapper[4681]: I0404 03:17:46.376377 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:47 crc kubenswrapper[4681]: I0404 03:17:47.448661 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f6fhq" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="registry-server" containerID="cri-o://eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30" gracePeriod=2
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.032144 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.118199 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content\") pod \"7eefa93e-c921-4823-ae08-626a462af47d\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") "
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.118333 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6p4kd\" (UniqueName: \"kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd\") pod \"7eefa93e-c921-4823-ae08-626a462af47d\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") "
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.118434 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities\") pod \"7eefa93e-c921-4823-ae08-626a462af47d\" (UID: \"7eefa93e-c921-4823-ae08-626a462af47d\") "
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.119493 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities" (OuterVolumeSpecName: "utilities") pod "7eefa93e-c921-4823-ae08-626a462af47d" (UID: "7eefa93e-c921-4823-ae08-626a462af47d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.127316 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd" (OuterVolumeSpecName: "kube-api-access-6p4kd") pod "7eefa93e-c921-4823-ae08-626a462af47d" (UID: "7eefa93e-c921-4823-ae08-626a462af47d"). InnerVolumeSpecName "kube-api-access-6p4kd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.221534 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6p4kd\" (UniqueName: \"kubernetes.io/projected/7eefa93e-c921-4823-ae08-626a462af47d-kube-api-access-6p4kd\") on node \"crc\" DevicePath \"\""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.221581 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.259475 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7eefa93e-c921-4823-ae08-626a462af47d" (UID: "7eefa93e-c921-4823-ae08-626a462af47d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.324172 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7eefa93e-c921-4823-ae08-626a462af47d-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.461705 4681 generic.go:334] "Generic (PLEG): container finished" podID="7eefa93e-c921-4823-ae08-626a462af47d" containerID="eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30" exitCode=0
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.461805 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerDied","Data":"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"}
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.461844 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f6fhq" event={"ID":"7eefa93e-c921-4823-ae08-626a462af47d","Type":"ContainerDied","Data":"77ee88c2d8148891f1cfdd2dccf74b4c5fc48ed28bfcddbe8d58f29eef516df4"}
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.461859 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f6fhq"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.461866 4681 scope.go:117] "RemoveContainer" containerID="eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.514505 4681 scope.go:117] "RemoveContainer" containerID="e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.531578 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.545207 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f6fhq"]
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.553033 4681 scope.go:117] "RemoveContainer" containerID="7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.597158 4681 scope.go:117] "RemoveContainer" containerID="eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"
Apr 04 03:17:48 crc kubenswrapper[4681]: E0404 03:17:48.598212 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30\": container with ID starting with eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30 not found: ID does not exist" containerID="eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.598363 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30"} err="failed to get container status \"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30\": rpc error: code = NotFound desc = could not find container \"eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30\": container with ID starting with eaf424eac85fea72b0462c3124035e96194e6a483295a91fce46711aa7810e30 not found: ID does not exist"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.598556 4681 scope.go:117] "RemoveContainer" containerID="e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"
Apr 04 03:17:48 crc kubenswrapper[4681]: E0404 03:17:48.599050 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc\": container with ID starting with e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc not found: ID does not exist" containerID="e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.599081 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc"} err="failed to get container status \"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc\": rpc error: code = NotFound desc = could not find container \"e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc\": container with ID starting with e2a2e5127a4de16e16a06c62eb9c5ce057e0c4f9e4a585b86aa5a38a1596b3cc not found: ID does not exist"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.599096 4681 scope.go:117] "RemoveContainer" containerID="7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d"
Apr 04 03:17:48 crc kubenswrapper[4681]: E0404 03:17:48.599312 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d\": container with ID starting with 7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d not found: ID does not exist" containerID="7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d"
Apr 04 03:17:48 crc kubenswrapper[4681]: I0404 03:17:48.599332 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d"} err="failed to get container status \"7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d\": rpc error: code = NotFound desc = could not find container \"7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d\": container with ID starting with 7a6628639a02d6f5763e27742cb917d1b8db4c8ef994c1a49c8cefbba24ad36d not found: ID does not exist"
Apr 04 03:17:49 crc kubenswrapper[4681]: I0404 03:17:49.222729 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eefa93e-c921-4823-ae08-626a462af47d" path="/var/lib/kubelet/pods/7eefa93e-c921-4823-ae08-626a462af47d/volumes"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.154837 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587878-h48b9"]
Apr 04 03:18:00 crc kubenswrapper[4681]: E0404 03:18:00.155768 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="extract-utilities"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.155783 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="extract-utilities"
Apr 04 03:18:00 crc kubenswrapper[4681]: E0404 03:18:00.155815 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="extract-content"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.155822 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="extract-content"
Apr 04 03:18:00 crc kubenswrapper[4681]: E0404 03:18:00.155838 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="registry-server"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.155843 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="registry-server"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.156058 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eefa93e-c921-4823-ae08-626a462af47d" containerName="registry-server"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.156815 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.165133 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587878-h48b9"]
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.192853 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.193148 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.193330 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.296477 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx8px\" (UniqueName: \"kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px\") pod \"auto-csr-approver-29587878-h48b9\" (UID: \"58f60a49-0f7c-44a7-97de-ee6d4969cd2d\") " pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.398586 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx8px\" (UniqueName: \"kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px\") pod \"auto-csr-approver-29587878-h48b9\" (UID: \"58f60a49-0f7c-44a7-97de-ee6d4969cd2d\") " pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.429125 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx8px\" (UniqueName: \"kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px\") pod \"auto-csr-approver-29587878-h48b9\" (UID: \"58f60a49-0f7c-44a7-97de-ee6d4969cd2d\") " pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.508950 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:00 crc kubenswrapper[4681]: I0404 03:18:00.975015 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587878-h48b9"]
Apr 04 03:18:01 crc kubenswrapper[4681]: I0404 03:18:01.622718 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587878-h48b9" event={"ID":"58f60a49-0f7c-44a7-97de-ee6d4969cd2d","Type":"ContainerStarted","Data":"2523740d67bf565baf8f39ec67800aa2a73ee0e59ff141836db6de6d1a27b532"}
Apr 04 03:18:02 crc kubenswrapper[4681]: I0404 03:18:02.633916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587878-h48b9" event={"ID":"58f60a49-0f7c-44a7-97de-ee6d4969cd2d","Type":"ContainerStarted","Data":"e9ff57a5084e294d69d77326a46690db24fb076f24493b05674c3d8229658243"}
Apr 04 03:18:03 crc kubenswrapper[4681]: I0404 03:18:03.647913 4681 generic.go:334] "Generic (PLEG): container finished" podID="58f60a49-0f7c-44a7-97de-ee6d4969cd2d" containerID="e9ff57a5084e294d69d77326a46690db24fb076f24493b05674c3d8229658243" exitCode=0
Apr 04 03:18:03 crc kubenswrapper[4681]: I0404 03:18:03.648014 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587878-h48b9" event={"ID":"58f60a49-0f7c-44a7-97de-ee6d4969cd2d","Type":"ContainerDied","Data":"e9ff57a5084e294d69d77326a46690db24fb076f24493b05674c3d8229658243"}
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.318959 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.385134 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx8px\" (UniqueName: \"kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px\") pod \"58f60a49-0f7c-44a7-97de-ee6d4969cd2d\" (UID: \"58f60a49-0f7c-44a7-97de-ee6d4969cd2d\") "
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.398590 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px" (OuterVolumeSpecName: "kube-api-access-gx8px") pod "58f60a49-0f7c-44a7-97de-ee6d4969cd2d" (UID: "58f60a49-0f7c-44a7-97de-ee6d4969cd2d"). InnerVolumeSpecName "kube-api-access-gx8px". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.488181 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx8px\" (UniqueName: \"kubernetes.io/projected/58f60a49-0f7c-44a7-97de-ee6d4969cd2d-kube-api-access-gx8px\") on node \"crc\" DevicePath \"\""
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.668231 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587878-h48b9" event={"ID":"58f60a49-0f7c-44a7-97de-ee6d4969cd2d","Type":"ContainerDied","Data":"2523740d67bf565baf8f39ec67800aa2a73ee0e59ff141836db6de6d1a27b532"}
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.668971 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2523740d67bf565baf8f39ec67800aa2a73ee0e59ff141836db6de6d1a27b532"
Apr 04 03:18:04 crc kubenswrapper[4681]: I0404 03:18:04.668387 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587878-h48b9"
Apr 04 03:18:05 crc kubenswrapper[4681]: I0404 03:18:05.387628 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587872-szqf6"]
Apr 04 03:18:05 crc kubenswrapper[4681]: I0404 03:18:05.397879 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587872-szqf6"]
Apr 04 03:18:07 crc kubenswrapper[4681]: I0404 03:18:07.221149 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88c2ed7d-8b5c-4223-9944-63035448641d" path="/var/lib/kubelet/pods/88c2ed7d-8b5c-4223-9944-63035448641d/volumes"
Apr 04 03:18:33 crc kubenswrapper[4681]: I0404 03:18:33.070986 4681 scope.go:117] "RemoveContainer" containerID="5a6c5ac33c9d7a8a30df697107d21b8e63a40c3b5eed9f482ecbc111ead50b91"
Apr 04 03:18:33 crc kubenswrapper[4681]: I0404 03:18:33.104437 4681 scope.go:117] "RemoveContainer" containerID="78dcd769cf5a96667d3012aea1be5dce654b723bbd14f6b6f8c84aaf4eac14b9"
Apr 04 03:18:33 crc kubenswrapper[4681]: I0404 03:18:33.165332 4681 scope.go:117] "RemoveContainer" containerID="c69f65257de90b7a6fff0d476cbc58d665da26297e079d0a4109c07e39c7277e"
Apr 04 03:18:33 crc kubenswrapper[4681]: I0404 03:18:33.241410 4681 scope.go:117] "RemoveContainer" containerID="c02093c214c9dd0cd7790f7a6ab86f909e778b4bed0f61c04b70ab82d015b690"
Apr 04 03:18:56 crc kubenswrapper[4681]: I0404 03:18:56.524115 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:18:56 crc kubenswrapper[4681]: I0404 03:18:56.525086 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:19:26 crc kubenswrapper[4681]: I0404 03:19:26.524197 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:19:26 crc kubenswrapper[4681]: I0404 03:19:26.524865 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.524629 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.525116 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.525179 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.526500 4681 kuberuntime_manager.go:1027] "Message for Container of pod"
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.526583 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0" gracePeriod=600 Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.886495 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0" exitCode=0 Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.886590 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0"} Apr 04 03:19:56 crc kubenswrapper[4681]: I0404 03:19:56.886918 4681 scope.go:117] "RemoveContainer" containerID="4732b8e8c28dc28f6f0429db09a31ccd60f221a7c340a003c828673da3225b4c" Apr 04 03:19:57 crc kubenswrapper[4681]: I0404 03:19:57.897486 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"} Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.151894 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587880-7tjfw"] Apr 04 03:20:00 crc 
kubenswrapper[4681]: E0404 03:20:00.153044 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f60a49-0f7c-44a7-97de-ee6d4969cd2d" containerName="oc" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.153064 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f60a49-0f7c-44a7-97de-ee6d4969cd2d" containerName="oc" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.153334 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f60a49-0f7c-44a7-97de-ee6d4969cd2d" containerName="oc" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.154213 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.156525 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.157020 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.157178 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.163140 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587880-7tjfw"] Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.216105 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7jm5\" (UniqueName: \"kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5\") pod \"auto-csr-approver-29587880-7tjfw\" (UID: \"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1\") " pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.317528 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-f7jm5\" (UniqueName: \"kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5\") pod \"auto-csr-approver-29587880-7tjfw\" (UID: \"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1\") " pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.343189 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7jm5\" (UniqueName: \"kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5\") pod \"auto-csr-approver-29587880-7tjfw\" (UID: \"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1\") " pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.474063 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.922831 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587880-7tjfw"] Apr 04 03:20:00 crc kubenswrapper[4681]: I0404 03:20:00.930950 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" event={"ID":"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1","Type":"ContainerStarted","Data":"97cf83f82ff049da2b2817a572e636cc1813db77803d769286f187ce2a40796e"} Apr 04 03:20:02 crc kubenswrapper[4681]: I0404 03:20:02.951293 4681 generic.go:334] "Generic (PLEG): container finished" podID="ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" containerID="bc0f83bd9af793949035455275163ccea35299d1a4871ebaa666d569079fec85" exitCode=0 Apr 04 03:20:02 crc kubenswrapper[4681]: I0404 03:20:02.951428 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" event={"ID":"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1","Type":"ContainerDied","Data":"bc0f83bd9af793949035455275163ccea35299d1a4871ebaa666d569079fec85"} Apr 04 03:20:04 crc 
kubenswrapper[4681]: I0404 03:20:04.282779 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.404227 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7jm5\" (UniqueName: \"kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5\") pod \"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1\" (UID: \"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1\") " Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.412598 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5" (OuterVolumeSpecName: "kube-api-access-f7jm5") pod "ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" (UID: "ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1"). InnerVolumeSpecName "kube-api-access-f7jm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.507514 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7jm5\" (UniqueName: \"kubernetes.io/projected/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1-kube-api-access-f7jm5\") on node \"crc\" DevicePath \"\"" Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.973373 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" event={"ID":"ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1","Type":"ContainerDied","Data":"97cf83f82ff049da2b2817a572e636cc1813db77803d769286f187ce2a40796e"} Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.973438 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97cf83f82ff049da2b2817a572e636cc1813db77803d769286f187ce2a40796e" Apr 04 03:20:04 crc kubenswrapper[4681]: I0404 03:20:04.973687 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587880-7tjfw" Apr 04 03:20:05 crc kubenswrapper[4681]: I0404 03:20:05.358879 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587874-wn64z"] Apr 04 03:20:05 crc kubenswrapper[4681]: I0404 03:20:05.370022 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587874-wn64z"] Apr 04 03:20:07 crc kubenswrapper[4681]: I0404 03:20:07.219611 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7d49235-c130-4b50-84f1-5ed2a85cb778" path="/var/lib/kubelet/pods/c7d49235-c130-4b50-84f1-5ed2a85cb778/volumes" Apr 04 03:20:33 crc kubenswrapper[4681]: I0404 03:20:33.506952 4681 scope.go:117] "RemoveContainer" containerID="9a2846c06d31064b6a46a267596d91425583cb92e44b703452beae784df7cf89" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.282383 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:39 crc kubenswrapper[4681]: E0404 03:20:39.283614 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" containerName="oc" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.283633 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" containerName="oc" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.283919 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" containerName="oc" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.286123 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.303197 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.360198 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.360330 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.360469 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpdpx\" (UniqueName: \"kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.462658 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdpx\" (UniqueName: \"kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.462819 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.462876 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.463498 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.464112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.488354 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdpx\" (UniqueName: \"kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx\") pod \"community-operators-kx7kg\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:39 crc kubenswrapper[4681]: I0404 03:20:39.611432 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:40 crc kubenswrapper[4681]: W0404 03:20:40.185557 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0b37201_9ed7_4bf9_aa47_b29b927ddb05.slice/crio-c8215dd7d1ea6f8d8099678075b77422970a9044df79e1c6cd410118c14102d4 WatchSource:0}: Error finding container c8215dd7d1ea6f8d8099678075b77422970a9044df79e1c6cd410118c14102d4: Status 404 returned error can't find the container with id c8215dd7d1ea6f8d8099678075b77422970a9044df79e1c6cd410118c14102d4 Apr 04 03:20:40 crc kubenswrapper[4681]: I0404 03:20:40.190634 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:40 crc kubenswrapper[4681]: I0404 03:20:40.398470 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerStarted","Data":"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051"} Apr 04 03:20:40 crc kubenswrapper[4681]: I0404 03:20:40.398525 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerStarted","Data":"c8215dd7d1ea6f8d8099678075b77422970a9044df79e1c6cd410118c14102d4"} Apr 04 03:20:41 crc kubenswrapper[4681]: I0404 03:20:41.408724 4681 generic.go:334] "Generic (PLEG): container finished" podID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerID="78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051" exitCode=0 Apr 04 03:20:41 crc kubenswrapper[4681]: I0404 03:20:41.408847 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" 
event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerDied","Data":"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051"} Apr 04 03:20:41 crc kubenswrapper[4681]: I0404 03:20:41.411028 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:20:42 crc kubenswrapper[4681]: I0404 03:20:42.423887 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerStarted","Data":"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e"} Apr 04 03:20:44 crc kubenswrapper[4681]: I0404 03:20:44.455113 4681 generic.go:334] "Generic (PLEG): container finished" podID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerID="2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e" exitCode=0 Apr 04 03:20:44 crc kubenswrapper[4681]: I0404 03:20:44.455229 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerDied","Data":"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e"} Apr 04 03:20:45 crc kubenswrapper[4681]: I0404 03:20:45.466912 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerStarted","Data":"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7"} Apr 04 03:20:45 crc kubenswrapper[4681]: I0404 03:20:45.493696 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kx7kg" podStartSLOduration=3.072461145 podStartE2EDuration="6.493678743s" podCreationTimestamp="2026-04-04 03:20:39 +0000 UTC" firstStartedPulling="2026-04-04 03:20:41.41080626 +0000 UTC m=+5121.076581380" lastFinishedPulling="2026-04-04 03:20:44.832023858 +0000 UTC 
m=+5124.497798978" observedRunningTime="2026-04-04 03:20:45.485434158 +0000 UTC m=+5125.151209308" watchObservedRunningTime="2026-04-04 03:20:45.493678743 +0000 UTC m=+5125.159453853" Apr 04 03:20:49 crc kubenswrapper[4681]: I0404 03:20:49.612856 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:49 crc kubenswrapper[4681]: I0404 03:20:49.613463 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:49 crc kubenswrapper[4681]: I0404 03:20:49.672821 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:50 crc kubenswrapper[4681]: I0404 03:20:50.609963 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:50 crc kubenswrapper[4681]: I0404 03:20:50.675414 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:52 crc kubenswrapper[4681]: I0404 03:20:52.553187 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kx7kg" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="registry-server" containerID="cri-o://6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7" gracePeriod=2 Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.214749 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.290555 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities\") pod \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.290724 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpdpx\" (UniqueName: \"kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx\") pod \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.290815 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content\") pod \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\" (UID: \"a0b37201-9ed7-4bf9-aa47-b29b927ddb05\") " Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.292445 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities" (OuterVolumeSpecName: "utilities") pod "a0b37201-9ed7-4bf9-aa47-b29b927ddb05" (UID: "a0b37201-9ed7-4bf9-aa47-b29b927ddb05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.303356 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx" (OuterVolumeSpecName: "kube-api-access-zpdpx") pod "a0b37201-9ed7-4bf9-aa47-b29b927ddb05" (UID: "a0b37201-9ed7-4bf9-aa47-b29b927ddb05"). InnerVolumeSpecName "kube-api-access-zpdpx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.364321 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0b37201-9ed7-4bf9-aa47-b29b927ddb05" (UID: "a0b37201-9ed7-4bf9-aa47-b29b927ddb05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.393950 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.393987 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpdpx\" (UniqueName: \"kubernetes.io/projected/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-kube-api-access-zpdpx\") on node \"crc\" DevicePath \"\"" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.393997 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b37201-9ed7-4bf9-aa47-b29b927ddb05-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.565028 4681 generic.go:334] "Generic (PLEG): container finished" podID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerID="6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7" exitCode=0 Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.565077 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerDied","Data":"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7"} Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.565091 4681 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-kx7kg" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.565110 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx7kg" event={"ID":"a0b37201-9ed7-4bf9-aa47-b29b927ddb05","Type":"ContainerDied","Data":"c8215dd7d1ea6f8d8099678075b77422970a9044df79e1c6cd410118c14102d4"} Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.565131 4681 scope.go:117] "RemoveContainer" containerID="6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.599621 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.602936 4681 scope.go:117] "RemoveContainer" containerID="2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.610533 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kx7kg"] Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.641174 4681 scope.go:117] "RemoveContainer" containerID="78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.681555 4681 scope.go:117] "RemoveContainer" containerID="6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7" Apr 04 03:20:53 crc kubenswrapper[4681]: E0404 03:20:53.682100 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7\": container with ID starting with 6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7 not found: ID does not exist" containerID="6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.682128 
4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7"} err="failed to get container status \"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7\": rpc error: code = NotFound desc = could not find container \"6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7\": container with ID starting with 6cf0d8b4f9d7ce74a89a8c378e60fcd0f9e58663b0d4d9a1c789647aee1f3ba7 not found: ID does not exist" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.682148 4681 scope.go:117] "RemoveContainer" containerID="2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e" Apr 04 03:20:53 crc kubenswrapper[4681]: E0404 03:20:53.684511 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e\": container with ID starting with 2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e not found: ID does not exist" containerID="2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.684537 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e"} err="failed to get container status \"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e\": rpc error: code = NotFound desc = could not find container \"2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e\": container with ID starting with 2f5664559ce93f2cf5b3430fd05007a314a001545f9992ab5d4b53a03ddd453e not found: ID does not exist" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.684551 4681 scope.go:117] "RemoveContainer" containerID="78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051" Apr 04 03:20:53 crc kubenswrapper[4681]: E0404 
03:20:53.684959 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051\": container with ID starting with 78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051 not found: ID does not exist" containerID="78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051" Apr 04 03:20:53 crc kubenswrapper[4681]: I0404 03:20:53.684979 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051"} err="failed to get container status \"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051\": rpc error: code = NotFound desc = could not find container \"78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051\": container with ID starting with 78210d3706e63b1750deec53eb5aa1a062e43de6fe16ae065ca27086489e5051 not found: ID does not exist" Apr 04 03:20:55 crc kubenswrapper[4681]: I0404 03:20:55.214219 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" path="/var/lib/kubelet/pods/a0b37201-9ed7-4bf9-aa47-b29b927ddb05/volumes" Apr 04 03:21:56 crc kubenswrapper[4681]: I0404 03:21:56.523762 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:21:56 crc kubenswrapper[4681]: I0404 03:21:56.524213 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.148806 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587882-t457x"] Apr 04 03:22:00 crc kubenswrapper[4681]: E0404 03:22:00.149904 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="extract-content" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.149922 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="extract-content" Apr 04 03:22:00 crc kubenswrapper[4681]: E0404 03:22:00.149943 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="registry-server" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.149950 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="registry-server" Apr 04 03:22:00 crc kubenswrapper[4681]: E0404 03:22:00.149978 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="extract-utilities" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.149985 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="extract-utilities" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.150253 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b37201-9ed7-4bf9-aa47-b29b927ddb05" containerName="registry-server" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.151277 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.153848 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.154118 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.154227 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.159351 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587882-t457x"] Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.298584 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zms9v\" (UniqueName: \"kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v\") pod \"auto-csr-approver-29587882-t457x\" (UID: \"30df2dbd-7792-4460-a15f-5d314523995f\") " pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.401854 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zms9v\" (UniqueName: \"kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v\") pod \"auto-csr-approver-29587882-t457x\" (UID: \"30df2dbd-7792-4460-a15f-5d314523995f\") " pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.433161 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zms9v\" (UniqueName: \"kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v\") pod \"auto-csr-approver-29587882-t457x\" (UID: \"30df2dbd-7792-4460-a15f-5d314523995f\") " 
pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.473148 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:00 crc kubenswrapper[4681]: I0404 03:22:00.949920 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587882-t457x"] Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.354615 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587882-t457x" event={"ID":"30df2dbd-7792-4460-a15f-5d314523995f","Type":"ContainerStarted","Data":"05d41390cb82c6d1c2695581bc5177ac06d8776a3ac241e6596ba907a4624bdc"} Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.688431 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.690615 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.714102 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.749869 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rmj\" (UniqueName: \"kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.749940 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.750225 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.852489 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.852596 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w6rmj\" (UniqueName: \"kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.852636 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.853036 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.853118 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:01 crc kubenswrapper[4681]: I0404 03:22:01.875558 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6rmj\" (UniqueName: \"kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj\") pod \"redhat-marketplace-qrk76\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:02 crc kubenswrapper[4681]: I0404 03:22:02.013527 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:02 crc kubenswrapper[4681]: I0404 03:22:02.365104 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587882-t457x" event={"ID":"30df2dbd-7792-4460-a15f-5d314523995f","Type":"ContainerStarted","Data":"6b7e23698ebe60a3272d07ec786229e1f661f83b5725b03a68909f53b112a709"} Apr 04 03:22:02 crc kubenswrapper[4681]: I0404 03:22:02.391804 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587882-t457x" podStartSLOduration=1.600909125 podStartE2EDuration="2.391782429s" podCreationTimestamp="2026-04-04 03:22:00 +0000 UTC" firstStartedPulling="2026-04-04 03:22:00.964797558 +0000 UTC m=+5200.630572678" lastFinishedPulling="2026-04-04 03:22:01.755670862 +0000 UTC m=+5201.421445982" observedRunningTime="2026-04-04 03:22:02.377305564 +0000 UTC m=+5202.043080684" watchObservedRunningTime="2026-04-04 03:22:02.391782429 +0000 UTC m=+5202.057557549" Apr 04 03:22:02 crc kubenswrapper[4681]: I0404 03:22:02.521756 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:03 crc kubenswrapper[4681]: I0404 03:22:03.378109 4681 generic.go:334] "Generic (PLEG): container finished" podID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerID="e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0" exitCode=0 Apr 04 03:22:03 crc kubenswrapper[4681]: I0404 03:22:03.378166 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerDied","Data":"e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0"} Apr 04 03:22:03 crc kubenswrapper[4681]: I0404 03:22:03.378672 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" 
event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerStarted","Data":"3741d390fd10bd7cd4e1f0969596c17253aebe576d66aeaf249244533f732290"} Apr 04 03:22:03 crc kubenswrapper[4681]: I0404 03:22:03.380792 4681 generic.go:334] "Generic (PLEG): container finished" podID="30df2dbd-7792-4460-a15f-5d314523995f" containerID="6b7e23698ebe60a3272d07ec786229e1f661f83b5725b03a68909f53b112a709" exitCode=0 Apr 04 03:22:03 crc kubenswrapper[4681]: I0404 03:22:03.380821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587882-t457x" event={"ID":"30df2dbd-7792-4460-a15f-5d314523995f","Type":"ContainerDied","Data":"6b7e23698ebe60a3272d07ec786229e1f661f83b5725b03a68909f53b112a709"} Apr 04 03:22:04 crc kubenswrapper[4681]: I0404 03:22:04.396958 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerStarted","Data":"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6"} Apr 04 03:22:04 crc kubenswrapper[4681]: I0404 03:22:04.758353 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:04 crc kubenswrapper[4681]: I0404 03:22:04.919585 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zms9v\" (UniqueName: \"kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v\") pod \"30df2dbd-7792-4460-a15f-5d314523995f\" (UID: \"30df2dbd-7792-4460-a15f-5d314523995f\") " Apr 04 03:22:04 crc kubenswrapper[4681]: I0404 03:22:04.926472 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v" (OuterVolumeSpecName: "kube-api-access-zms9v") pod "30df2dbd-7792-4460-a15f-5d314523995f" (UID: "30df2dbd-7792-4460-a15f-5d314523995f"). 
InnerVolumeSpecName "kube-api-access-zms9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.022478 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zms9v\" (UniqueName: \"kubernetes.io/projected/30df2dbd-7792-4460-a15f-5d314523995f-kube-api-access-zms9v\") on node \"crc\" DevicePath \"\"" Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.420666 4681 generic.go:334] "Generic (PLEG): container finished" podID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerID="502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6" exitCode=0 Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.421879 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerDied","Data":"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6"} Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.450175 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587882-t457x" event={"ID":"30df2dbd-7792-4460-a15f-5d314523995f","Type":"ContainerDied","Data":"05d41390cb82c6d1c2695581bc5177ac06d8776a3ac241e6596ba907a4624bdc"} Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.450215 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05d41390cb82c6d1c2695581bc5177ac06d8776a3ac241e6596ba907a4624bdc" Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.450301 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587882-t457x" Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.476483 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587876-qbq7b"] Apr 04 03:22:05 crc kubenswrapper[4681]: I0404 03:22:05.487973 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587876-qbq7b"] Apr 04 03:22:06 crc kubenswrapper[4681]: I0404 03:22:06.463087 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerStarted","Data":"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb"} Apr 04 03:22:06 crc kubenswrapper[4681]: I0404 03:22:06.490045 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qrk76" podStartSLOduration=3.077159875 podStartE2EDuration="5.490025132s" podCreationTimestamp="2026-04-04 03:22:01 +0000 UTC" firstStartedPulling="2026-04-04 03:22:03.380226676 +0000 UTC m=+5203.046001816" lastFinishedPulling="2026-04-04 03:22:05.793091953 +0000 UTC m=+5205.458867073" observedRunningTime="2026-04-04 03:22:06.484598043 +0000 UTC m=+5206.150373163" watchObservedRunningTime="2026-04-04 03:22:06.490025132 +0000 UTC m=+5206.155800252" Apr 04 03:22:07 crc kubenswrapper[4681]: I0404 03:22:07.212901 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="396a9979-459d-4574-854b-b1d49f26194e" path="/var/lib/kubelet/pods/396a9979-459d-4574-854b-b1d49f26194e/volumes" Apr 04 03:22:12 crc kubenswrapper[4681]: I0404 03:22:12.014590 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:12 crc kubenswrapper[4681]: I0404 03:22:12.015825 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:12 crc kubenswrapper[4681]: I0404 03:22:12.069736 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:12 crc kubenswrapper[4681]: I0404 03:22:12.563175 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:12 crc kubenswrapper[4681]: I0404 03:22:12.614033 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:14 crc kubenswrapper[4681]: I0404 03:22:14.540222 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qrk76" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="registry-server" containerID="cri-o://c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb" gracePeriod=2 Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.099321 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.260154 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6rmj\" (UniqueName: \"kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj\") pod \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.260630 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities\") pod \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.260908 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content\") pod \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\" (UID: \"aa0762fa-e3b5-448e-bc45-439b634c7e9b\") " Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.261486 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities" (OuterVolumeSpecName: "utilities") pod "aa0762fa-e3b5-448e-bc45-439b634c7e9b" (UID: "aa0762fa-e3b5-448e-bc45-439b634c7e9b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.261827 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.270189 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj" (OuterVolumeSpecName: "kube-api-access-w6rmj") pod "aa0762fa-e3b5-448e-bc45-439b634c7e9b" (UID: "aa0762fa-e3b5-448e-bc45-439b634c7e9b"). InnerVolumeSpecName "kube-api-access-w6rmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.364373 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6rmj\" (UniqueName: \"kubernetes.io/projected/aa0762fa-e3b5-448e-bc45-439b634c7e9b-kube-api-access-w6rmj\") on node \"crc\" DevicePath \"\"" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.549854 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa0762fa-e3b5-448e-bc45-439b634c7e9b" (UID: "aa0762fa-e3b5-448e-bc45-439b634c7e9b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.553973 4681 generic.go:334] "Generic (PLEG): container finished" podID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerID="c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb" exitCode=0 Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.554045 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qrk76" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.554049 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerDied","Data":"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb"} Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.554106 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qrk76" event={"ID":"aa0762fa-e3b5-448e-bc45-439b634c7e9b","Type":"ContainerDied","Data":"3741d390fd10bd7cd4e1f0969596c17253aebe576d66aeaf249244533f732290"} Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.554137 4681 scope.go:117] "RemoveContainer" containerID="c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.567843 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa0762fa-e3b5-448e-bc45-439b634c7e9b-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.589705 4681 scope.go:117] "RemoveContainer" containerID="502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.605721 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.618963 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qrk76"] Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.626837 4681 scope.go:117] "RemoveContainer" containerID="e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.684451 4681 scope.go:117] "RemoveContainer" 
containerID="c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb" Apr 04 03:22:15 crc kubenswrapper[4681]: E0404 03:22:15.685233 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb\": container with ID starting with c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb not found: ID does not exist" containerID="c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.685356 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb"} err="failed to get container status \"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb\": rpc error: code = NotFound desc = could not find container \"c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb\": container with ID starting with c3684d901178eca1840b5b4245ae83072b1db2e65c569deba1a2a5edfd6b2ccb not found: ID does not exist" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.685383 4681 scope.go:117] "RemoveContainer" containerID="502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6" Apr 04 03:22:15 crc kubenswrapper[4681]: E0404 03:22:15.686051 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6\": container with ID starting with 502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6 not found: ID does not exist" containerID="502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.686082 4681 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6"} err="failed to get container status \"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6\": rpc error: code = NotFound desc = could not find container \"502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6\": container with ID starting with 502a366912638f44b07faa75eb073a6d7c5635e44d8a15e57e7e1fc0d073a3a6 not found: ID does not exist" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.686103 4681 scope.go:117] "RemoveContainer" containerID="e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0" Apr 04 03:22:15 crc kubenswrapper[4681]: E0404 03:22:15.686807 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0\": container with ID starting with e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0 not found: ID does not exist" containerID="e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0" Apr 04 03:22:15 crc kubenswrapper[4681]: I0404 03:22:15.686833 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0"} err="failed to get container status \"e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0\": rpc error: code = NotFound desc = could not find container \"e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0\": container with ID starting with e2320d68e95c3b7deb80f6cfabc09a906611ba088a8f247c6e1b34b90017add0 not found: ID does not exist" Apr 04 03:22:17 crc kubenswrapper[4681]: I0404 03:22:17.221881 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" path="/var/lib/kubelet/pods/aa0762fa-e3b5-448e-bc45-439b634c7e9b/volumes" Apr 04 03:22:26 crc kubenswrapper[4681]: I0404 
03:22:26.524439 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:22:26 crc kubenswrapper[4681]: I0404 03:22:26.525108 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:22:33 crc kubenswrapper[4681]: I0404 03:22:33.677690 4681 scope.go:117] "RemoveContainer" containerID="393b2692bcb0bd034d2c8189711144a3f00fbf79d97563b9be470f6761676d8a" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.524814 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.525544 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.525594 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.526442 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.526510 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" gracePeriod=600 Apr 04 03:22:56 crc kubenswrapper[4681]: E0404 03:22:56.663445 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.989465 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" exitCode=0 Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.989516 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"} Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.989561 4681 scope.go:117] "RemoveContainer" containerID="cb35f78733973682832c145e9adcab8aa8004fd64ad6ea23073eaf4ca12de0a0" Apr 04 03:22:56 crc kubenswrapper[4681]: I0404 03:22:56.990348 4681 
scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:22:56 crc kubenswrapper[4681]: E0404 03:22:56.990648 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:23:09 crc kubenswrapper[4681]: I0404 03:23:09.202455 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:23:09 crc kubenswrapper[4681]: E0404 03:23:09.203233 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:23:20 crc kubenswrapper[4681]: I0404 03:23:20.201233 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:23:20 crc kubenswrapper[4681]: E0404 03:23:20.201969 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:23:32 crc kubenswrapper[4681]: I0404 
03:23:32.201804 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:23:32 crc kubenswrapper[4681]: E0404 03:23:32.202788 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:23:47 crc kubenswrapper[4681]: I0404 03:23:47.200990 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:23:47 crc kubenswrapper[4681]: E0404 03:23:47.201651 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.149892 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587884-q5qlt"] Apr 04 03:24:00 crc kubenswrapper[4681]: E0404 03:24:00.151908 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="extract-content" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.152021 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="extract-content" Apr 04 03:24:00 crc kubenswrapper[4681]: E0404 03:24:00.152164 4681 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="30df2dbd-7792-4460-a15f-5d314523995f" containerName="oc" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.152250 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="30df2dbd-7792-4460-a15f-5d314523995f" containerName="oc" Apr 04 03:24:00 crc kubenswrapper[4681]: E0404 03:24:00.152358 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="extract-utilities" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.152462 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="extract-utilities" Apr 04 03:24:00 crc kubenswrapper[4681]: E0404 03:24:00.152563 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="registry-server" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.152630 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="registry-server" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.152965 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="30df2dbd-7792-4460-a15f-5d314523995f" containerName="oc" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.153065 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa0762fa-e3b5-448e-bc45-439b634c7e9b" containerName="registry-server" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.154476 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.157960 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.158050 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.158100 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.169974 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587884-q5qlt"] Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.201608 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:24:00 crc kubenswrapper[4681]: E0404 03:24:00.202176 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.209351 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxhgp\" (UniqueName: \"kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp\") pod \"auto-csr-approver-29587884-q5qlt\" (UID: \"f49d320b-31c2-48d9-909e-7905bb70030e\") " pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.311208 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gxhgp\" (UniqueName: \"kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp\") pod \"auto-csr-approver-29587884-q5qlt\" (UID: \"f49d320b-31c2-48d9-909e-7905bb70030e\") " pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.336864 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxhgp\" (UniqueName: \"kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp\") pod \"auto-csr-approver-29587884-q5qlt\" (UID: \"f49d320b-31c2-48d9-909e-7905bb70030e\") " pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:00 crc kubenswrapper[4681]: I0404 03:24:00.479396 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:01 crc kubenswrapper[4681]: I0404 03:24:00.995675 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587884-q5qlt"] Apr 04 03:24:01 crc kubenswrapper[4681]: I0404 03:24:01.717036 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" event={"ID":"f49d320b-31c2-48d9-909e-7905bb70030e","Type":"ContainerStarted","Data":"ee1db4b920d3629dd988d34de90d002ee6f75c97fb3f37caaf9c97bb688826e5"} Apr 04 03:24:02 crc kubenswrapper[4681]: I0404 03:24:02.729093 4681 generic.go:334] "Generic (PLEG): container finished" podID="f49d320b-31c2-48d9-909e-7905bb70030e" containerID="2eb0cf7e931a6d66a799ae8f50332adde04d8492cb844f527ebee656412baecc" exitCode=0 Apr 04 03:24:02 crc kubenswrapper[4681]: I0404 03:24:02.729207 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" 
event={"ID":"f49d320b-31c2-48d9-909e-7905bb70030e","Type":"ContainerDied","Data":"2eb0cf7e931a6d66a799ae8f50332adde04d8492cb844f527ebee656412baecc"} Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.181548 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.303175 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxhgp\" (UniqueName: \"kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp\") pod \"f49d320b-31c2-48d9-909e-7905bb70030e\" (UID: \"f49d320b-31c2-48d9-909e-7905bb70030e\") " Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.309498 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp" (OuterVolumeSpecName: "kube-api-access-gxhgp") pod "f49d320b-31c2-48d9-909e-7905bb70030e" (UID: "f49d320b-31c2-48d9-909e-7905bb70030e"). InnerVolumeSpecName "kube-api-access-gxhgp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.407763 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxhgp\" (UniqueName: \"kubernetes.io/projected/f49d320b-31c2-48d9-909e-7905bb70030e-kube-api-access-gxhgp\") on node \"crc\" DevicePath \"\"" Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.767675 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" event={"ID":"f49d320b-31c2-48d9-909e-7905bb70030e","Type":"ContainerDied","Data":"ee1db4b920d3629dd988d34de90d002ee6f75c97fb3f37caaf9c97bb688826e5"} Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.767933 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee1db4b920d3629dd988d34de90d002ee6f75c97fb3f37caaf9c97bb688826e5" Apr 04 03:24:04 crc kubenswrapper[4681]: I0404 03:24:04.767760 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587884-q5qlt" Apr 04 03:24:05 crc kubenswrapper[4681]: I0404 03:24:05.291891 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587878-h48b9"] Apr 04 03:24:05 crc kubenswrapper[4681]: I0404 03:24:05.313847 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587878-h48b9"] Apr 04 03:24:07 crc kubenswrapper[4681]: I0404 03:24:07.212101 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58f60a49-0f7c-44a7-97de-ee6d4969cd2d" path="/var/lib/kubelet/pods/58f60a49-0f7c-44a7-97de-ee6d4969cd2d/volumes" Apr 04 03:24:13 crc kubenswrapper[4681]: I0404 03:24:13.201120 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:24:13 crc kubenswrapper[4681]: E0404 03:24:13.202049 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:24:22 crc kubenswrapper[4681]: E0404 03:24:22.821481 4681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.71:37830->38.129.56.71:44093: write tcp 38.129.56.71:37830->38.129.56.71:44093: write: connection reset by peer Apr 04 03:24:24 crc kubenswrapper[4681]: I0404 03:24:24.200632 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:24:24 crc kubenswrapper[4681]: E0404 03:24:24.201117 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:24:33 crc kubenswrapper[4681]: I0404 03:24:33.788686 4681 scope.go:117] "RemoveContainer" containerID="e9ff57a5084e294d69d77326a46690db24fb076f24493b05674c3d8229658243" Apr 04 03:24:38 crc kubenswrapper[4681]: I0404 03:24:38.200922 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:24:38 crc kubenswrapper[4681]: E0404 03:24:38.201672 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:24:52 crc kubenswrapper[4681]: I0404 03:24:52.202489 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:24:52 crc kubenswrapper[4681]: E0404 03:24:52.203481 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:25:05 crc kubenswrapper[4681]: I0404 03:25:05.201250 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:25:05 crc kubenswrapper[4681]: E0404 03:25:05.202075 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:25:19 crc kubenswrapper[4681]: I0404 03:25:19.202248 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:25:19 crc kubenswrapper[4681]: E0404 03:25:19.203632 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.201119 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:25:30 crc kubenswrapper[4681]: E0404 03:25:30.201989 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.581038 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"] Apr 04 03:25:30 crc kubenswrapper[4681]: E0404 03:25:30.581567 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f49d320b-31c2-48d9-909e-7905bb70030e" containerName="oc" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.581598 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f49d320b-31c2-48d9-909e-7905bb70030e" containerName="oc" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.582057 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f49d320b-31c2-48d9-909e-7905bb70030e" containerName="oc" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.598021 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"] Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.598147 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.722154 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.722304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.722356 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx2np\" (UniqueName: \"kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.823650 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rx2np\" (UniqueName: \"kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.823790 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities\") pod 
\"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.823952 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.824483 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.824514 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.858286 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rx2np\" (UniqueName: \"kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np\") pod \"certified-operators-zl2vv\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:30 crc kubenswrapper[4681]: I0404 03:25:30.923392 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:31 crc kubenswrapper[4681]: I0404 03:25:31.530429 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"] Apr 04 03:25:31 crc kubenswrapper[4681]: I0404 03:25:31.677988 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerStarted","Data":"4c2a12377a3c494726cb21347c46fe1966d9db5f6d1f90e68a3e9938dcfa59d2"} Apr 04 03:25:32 crc kubenswrapper[4681]: I0404 03:25:32.688409 4681 generic.go:334] "Generic (PLEG): container finished" podID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerID="30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41" exitCode=0 Apr 04 03:25:32 crc kubenswrapper[4681]: I0404 03:25:32.688461 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerDied","Data":"30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41"} Apr 04 03:25:33 crc kubenswrapper[4681]: I0404 03:25:33.701696 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerStarted","Data":"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"} Apr 04 03:25:35 crc kubenswrapper[4681]: I0404 03:25:35.732754 4681 generic.go:334] "Generic (PLEG): container finished" podID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerID="1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e" exitCode=0 Apr 04 03:25:35 crc kubenswrapper[4681]: I0404 03:25:35.732839 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" 
event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerDied","Data":"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"} Apr 04 03:25:37 crc kubenswrapper[4681]: I0404 03:25:37.758468 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerStarted","Data":"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be"} Apr 04 03:25:37 crc kubenswrapper[4681]: I0404 03:25:37.787173 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zl2vv" podStartSLOduration=4.394869573 podStartE2EDuration="7.787153662s" podCreationTimestamp="2026-04-04 03:25:30 +0000 UTC" firstStartedPulling="2026-04-04 03:25:32.690803367 +0000 UTC m=+5412.356578497" lastFinishedPulling="2026-04-04 03:25:36.083087456 +0000 UTC m=+5415.748862586" observedRunningTime="2026-04-04 03:25:37.779073301 +0000 UTC m=+5417.444848441" watchObservedRunningTime="2026-04-04 03:25:37.787153662 +0000 UTC m=+5417.452928782" Apr 04 03:25:40 crc kubenswrapper[4681]: I0404 03:25:40.924436 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:40 crc kubenswrapper[4681]: I0404 03:25:40.924859 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:40 crc kubenswrapper[4681]: I0404 03:25:40.967718 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:45 crc kubenswrapper[4681]: I0404 03:25:45.201815 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:25:45 crc kubenswrapper[4681]: E0404 03:25:45.202755 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:25:50 crc kubenswrapper[4681]: I0404 03:25:50.978595 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:51 crc kubenswrapper[4681]: I0404 03:25:51.039674 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"] Apr 04 03:25:51 crc kubenswrapper[4681]: I0404 03:25:51.916725 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zl2vv" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="registry-server" containerID="cri-o://779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be" gracePeriod=2 Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.495624 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl2vv" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.539076 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities\") pod \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.539179 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rx2np\" (UniqueName: \"kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np\") pod \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.540413 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content\") pod \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\" (UID: \"c44b0cad-66f7-481c-a8b3-f1c554c05f6f\") " Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.540439 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities" (OuterVolumeSpecName: "utilities") pod "c44b0cad-66f7-481c-a8b3-f1c554c05f6f" (UID: "c44b0cad-66f7-481c-a8b3-f1c554c05f6f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.542999 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.547260 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np" (OuterVolumeSpecName: "kube-api-access-rx2np") pod "c44b0cad-66f7-481c-a8b3-f1c554c05f6f" (UID: "c44b0cad-66f7-481c-a8b3-f1c554c05f6f"). InnerVolumeSpecName "kube-api-access-rx2np". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.619847 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c44b0cad-66f7-481c-a8b3-f1c554c05f6f" (UID: "c44b0cad-66f7-481c-a8b3-f1c554c05f6f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.644861 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rx2np\" (UniqueName: \"kubernetes.io/projected/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-kube-api-access-rx2np\") on node \"crc\" DevicePath \"\"" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.644900 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c44b0cad-66f7-481c-a8b3-f1c554c05f6f-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.928256 4681 generic.go:334] "Generic (PLEG): container finished" podID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerID="779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be" exitCode=0 Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.928310 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerDied","Data":"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be"} Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.928335 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl2vv" event={"ID":"c44b0cad-66f7-481c-a8b3-f1c554c05f6f","Type":"ContainerDied","Data":"4c2a12377a3c494726cb21347c46fe1966d9db5f6d1f90e68a3e9938dcfa59d2"} Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.928373 4681 scope.go:117] "RemoveContainer" containerID="779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be" Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.928378 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zl2vv"
Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.948692 4681 scope.go:117] "RemoveContainer" containerID="1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"
Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.965904 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"]
Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.977418 4681 scope.go:117] "RemoveContainer" containerID="30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41"
Apr 04 03:25:52 crc kubenswrapper[4681]: I0404 03:25:52.983283 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zl2vv"]
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.039849 4681 scope.go:117] "RemoveContainer" containerID="779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be"
Apr 04 03:25:53 crc kubenswrapper[4681]: E0404 03:25:53.040315 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be\": container with ID starting with 779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be not found: ID does not exist" containerID="779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.040344 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be"} err="failed to get container status \"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be\": rpc error: code = NotFound desc = could not find container \"779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be\": container with ID starting with 779d74e8994195b766f7d9d5457554d14e399d339df7716f6f2695095321c4be not found: ID does not exist"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.040367 4681 scope.go:117] "RemoveContainer" containerID="1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"
Apr 04 03:25:53 crc kubenswrapper[4681]: E0404 03:25:53.040586 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e\": container with ID starting with 1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e not found: ID does not exist" containerID="1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.040637 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e"} err="failed to get container status \"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e\": rpc error: code = NotFound desc = could not find container \"1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e\": container with ID starting with 1b95c9a34617bf12bcff02cf6f66aac201f6cf31eda0aea4ff7f60b550efda4e not found: ID does not exist"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.040655 4681 scope.go:117] "RemoveContainer" containerID="30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41"
Apr 04 03:25:53 crc kubenswrapper[4681]: E0404 03:25:53.040884 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41\": container with ID starting with 30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41 not found: ID does not exist" containerID="30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.040903 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41"} err="failed to get container status \"30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41\": rpc error: code = NotFound desc = could not find container \"30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41\": container with ID starting with 30acd6f3eabec1e31836491288258be9902f047e9fc57be8e59faa43fb2bfb41 not found: ID does not exist"
Apr 04 03:25:53 crc kubenswrapper[4681]: I0404 03:25:53.211377 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" path="/var/lib/kubelet/pods/c44b0cad-66f7-481c-a8b3-f1c554c05f6f/volumes"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.142440 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587886-nfnbm"]
Apr 04 03:26:00 crc kubenswrapper[4681]: E0404 03:26:00.143537 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="extract-utilities"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.143555 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="extract-utilities"
Apr 04 03:26:00 crc kubenswrapper[4681]: E0404 03:26:00.143579 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="extract-content"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.143589 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="extract-content"
Apr 04 03:26:00 crc kubenswrapper[4681]: E0404 03:26:00.143608 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="registry-server"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.143615 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="registry-server"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.143873 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44b0cad-66f7-481c-a8b3-f1c554c05f6f" containerName="registry-server"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.144753 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.155306 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587886-nfnbm"]
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.172177 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.174400 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.174851 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.201184 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:26:00 crc kubenswrapper[4681]: E0404 03:26:00.201556 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.208551 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wspzh\" (UniqueName: \"kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh\") pod \"auto-csr-approver-29587886-nfnbm\" (UID: \"4e1c55d0-61c9-4b1c-8caa-796bd323b27a\") " pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.310191 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wspzh\" (UniqueName: \"kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh\") pod \"auto-csr-approver-29587886-nfnbm\" (UID: \"4e1c55d0-61c9-4b1c-8caa-796bd323b27a\") " pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.331669 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wspzh\" (UniqueName: \"kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh\") pod \"auto-csr-approver-29587886-nfnbm\" (UID: \"4e1c55d0-61c9-4b1c-8caa-796bd323b27a\") " pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.489192 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.988755 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587886-nfnbm"]
Apr 04 03:26:00 crc kubenswrapper[4681]: I0404 03:26:00.992440 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 04 03:26:01 crc kubenswrapper[4681]: I0404 03:26:01.011621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587886-nfnbm" event={"ID":"4e1c55d0-61c9-4b1c-8caa-796bd323b27a","Type":"ContainerStarted","Data":"ad219ebd0e1fe0d68a8366eb1ba13519eb688625273ba4f68a331c7b72a3c681"}
Apr 04 03:26:03 crc kubenswrapper[4681]: I0404 03:26:03.070551 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e1c55d0-61c9-4b1c-8caa-796bd323b27a" containerID="ad8f63db96a91428e0f9dae18134a8ecbdc84dd163f5b6dad2fb16e5986499e0" exitCode=0
Apr 04 03:26:03 crc kubenswrapper[4681]: I0404 03:26:03.070615 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587886-nfnbm" event={"ID":"4e1c55d0-61c9-4b1c-8caa-796bd323b27a","Type":"ContainerDied","Data":"ad8f63db96a91428e0f9dae18134a8ecbdc84dd163f5b6dad2fb16e5986499e0"}
Apr 04 03:26:04 crc kubenswrapper[4681]: I0404 03:26:04.998445 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.090916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587886-nfnbm" event={"ID":"4e1c55d0-61c9-4b1c-8caa-796bd323b27a","Type":"ContainerDied","Data":"ad219ebd0e1fe0d68a8366eb1ba13519eb688625273ba4f68a331c7b72a3c681"}
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.091335 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad219ebd0e1fe0d68a8366eb1ba13519eb688625273ba4f68a331c7b72a3c681"
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.091397 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587886-nfnbm"
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.123202 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wspzh\" (UniqueName: \"kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh\") pod \"4e1c55d0-61c9-4b1c-8caa-796bd323b27a\" (UID: \"4e1c55d0-61c9-4b1c-8caa-796bd323b27a\") "
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.143726 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh" (OuterVolumeSpecName: "kube-api-access-wspzh") pod "4e1c55d0-61c9-4b1c-8caa-796bd323b27a" (UID: "4e1c55d0-61c9-4b1c-8caa-796bd323b27a"). InnerVolumeSpecName "kube-api-access-wspzh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:26:05 crc kubenswrapper[4681]: I0404 03:26:05.225255 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wspzh\" (UniqueName: \"kubernetes.io/projected/4e1c55d0-61c9-4b1c-8caa-796bd323b27a-kube-api-access-wspzh\") on node \"crc\" DevicePath \"\""
Apr 04 03:26:06 crc kubenswrapper[4681]: I0404 03:26:06.075446 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587880-7tjfw"]
Apr 04 03:26:06 crc kubenswrapper[4681]: I0404 03:26:06.085721 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587880-7tjfw"]
Apr 04 03:26:07 crc kubenswrapper[4681]: I0404 03:26:07.218860 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1" path="/var/lib/kubelet/pods/ce4ed2c2-8871-48dc-b6ac-e2b62ea8dff1/volumes"
Apr 04 03:26:13 crc kubenswrapper[4681]: I0404 03:26:13.201992 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:26:13 crc kubenswrapper[4681]: E0404 03:26:13.203010 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:26:27 crc kubenswrapper[4681]: I0404 03:26:27.201455 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:26:27 crc kubenswrapper[4681]: E0404 03:26:27.202211 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:26:33 crc kubenswrapper[4681]: I0404 03:26:33.948991 4681 scope.go:117] "RemoveContainer" containerID="bc0f83bd9af793949035455275163ccea35299d1a4871ebaa666d569079fec85"
Apr 04 03:26:41 crc kubenswrapper[4681]: I0404 03:26:41.209108 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:26:41 crc kubenswrapper[4681]: E0404 03:26:41.209987 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:26:55 crc kubenswrapper[4681]: I0404 03:26:55.201197 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:26:55 crc kubenswrapper[4681]: E0404 03:26:55.202053 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:27:09 crc kubenswrapper[4681]: I0404 03:27:09.201159 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:27:09 crc kubenswrapper[4681]: E0404 03:27:09.201953 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:27:22 crc kubenswrapper[4681]: I0404 03:27:22.201623 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:27:22 crc kubenswrapper[4681]: E0404 03:27:22.202757 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.835644 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:34 crc kubenswrapper[4681]: E0404 03:27:34.836781 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1c55d0-61c9-4b1c-8caa-796bd323b27a" containerName="oc"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.836798 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1c55d0-61c9-4b1c-8caa-796bd323b27a" containerName="oc"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.837047 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1c55d0-61c9-4b1c-8caa-796bd323b27a" containerName="oc"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.839012 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.853506 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.968593 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fglvt\" (UniqueName: \"kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.968982 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:34 crc kubenswrapper[4681]: I0404 03:27:34.969180 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.071451 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.071554 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.071722 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fglvt\" (UniqueName: \"kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.071955 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.071991 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.101468 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fglvt\" (UniqueName: \"kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt\") pod \"redhat-operators-rwp5f\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") " pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.163408 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:35 crc kubenswrapper[4681]: I0404 03:27:35.699326 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:36 crc kubenswrapper[4681]: I0404 03:27:36.031910 4681 generic.go:334] "Generic (PLEG): container finished" podID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerID="5095f5425b7a348536ffed9d23219a4c8ba98a561a1ce8d6764af20d8e2463d0" exitCode=0
Apr 04 03:27:36 crc kubenswrapper[4681]: I0404 03:27:36.031979 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerDied","Data":"5095f5425b7a348536ffed9d23219a4c8ba98a561a1ce8d6764af20d8e2463d0"}
Apr 04 03:27:36 crc kubenswrapper[4681]: I0404 03:27:36.032153 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerStarted","Data":"04db5491c3e8e833e421cdb26480ec16c64932a16ee12ccefb4dbfedc3a8f860"}
Apr 04 03:27:36 crc kubenswrapper[4681]: I0404 03:27:36.201682 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:27:36 crc kubenswrapper[4681]: E0404 03:27:36.201951 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:27:37 crc kubenswrapper[4681]: I0404 03:27:37.049953 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerStarted","Data":"748366a0d26b3d843d1ca17d8c5cd8984499f583816d72b6649af98e1913f67f"}
Apr 04 03:27:39 crc kubenswrapper[4681]: I0404 03:27:39.069589 4681 generic.go:334] "Generic (PLEG): container finished" podID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerID="748366a0d26b3d843d1ca17d8c5cd8984499f583816d72b6649af98e1913f67f" exitCode=0
Apr 04 03:27:39 crc kubenswrapper[4681]: I0404 03:27:39.069664 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerDied","Data":"748366a0d26b3d843d1ca17d8c5cd8984499f583816d72b6649af98e1913f67f"}
Apr 04 03:27:44 crc kubenswrapper[4681]: I0404 03:27:44.129498 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerStarted","Data":"2dd098cca2fa2d07e749264a1c406a17fe7b53ab99c04c26414629cd0592c336"}
Apr 04 03:27:44 crc kubenswrapper[4681]: I0404 03:27:44.162628 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rwp5f" podStartSLOduration=3.436658871 podStartE2EDuration="10.16260457s" podCreationTimestamp="2026-04-04 03:27:34 +0000 UTC" firstStartedPulling="2026-04-04 03:27:36.033838314 +0000 UTC m=+5535.699613434" lastFinishedPulling="2026-04-04 03:27:42.759784013 +0000 UTC m=+5542.425559133" observedRunningTime="2026-04-04 03:27:44.147500669 +0000 UTC m=+5543.813275789" watchObservedRunningTime="2026-04-04 03:27:44.16260457 +0000 UTC m=+5543.828379690"
Apr 04 03:27:45 crc kubenswrapper[4681]: I0404 03:27:45.164340 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:45 crc kubenswrapper[4681]: I0404 03:27:45.164430 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:46 crc kubenswrapper[4681]: I0404 03:27:46.212124 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rwp5f" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="registry-server" probeResult="failure" output=<
Apr 04 03:27:46 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s
Apr 04 03:27:46 crc kubenswrapper[4681]: >
Apr 04 03:27:47 crc kubenswrapper[4681]: I0404 03:27:47.200713 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:27:47 crc kubenswrapper[4681]: E0404 03:27:47.201411 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 03:27:55 crc kubenswrapper[4681]: I0404 03:27:55.216047 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:55 crc kubenswrapper[4681]: I0404 03:27:55.262177 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:55 crc kubenswrapper[4681]: I0404 03:27:55.454726 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:56 crc kubenswrapper[4681]: I0404 03:27:56.267806 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rwp5f" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="registry-server" containerID="cri-o://2dd098cca2fa2d07e749264a1c406a17fe7b53ab99c04c26414629cd0592c336" gracePeriod=2
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.281241 4681 generic.go:334] "Generic (PLEG): container finished" podID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerID="2dd098cca2fa2d07e749264a1c406a17fe7b53ab99c04c26414629cd0592c336" exitCode=0
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.281287 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerDied","Data":"2dd098cca2fa2d07e749264a1c406a17fe7b53ab99c04c26414629cd0592c336"}
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.282507 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwp5f" event={"ID":"51afbf84-17c8-4424-8f03-f568ea010a1d","Type":"ContainerDied","Data":"04db5491c3e8e833e421cdb26480ec16c64932a16ee12ccefb4dbfedc3a8f860"}
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.282576 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04db5491c3e8e833e421cdb26480ec16c64932a16ee12ccefb4dbfedc3a8f860"
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.373146 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.460310 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content\") pod \"51afbf84-17c8-4424-8f03-f568ea010a1d\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") "
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.461004 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities\") pod \"51afbf84-17c8-4424-8f03-f568ea010a1d\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") "
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.461243 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fglvt\" (UniqueName: \"kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt\") pod \"51afbf84-17c8-4424-8f03-f568ea010a1d\" (UID: \"51afbf84-17c8-4424-8f03-f568ea010a1d\") "
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.461837 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities" (OuterVolumeSpecName: "utilities") pod "51afbf84-17c8-4424-8f03-f568ea010a1d" (UID: "51afbf84-17c8-4424-8f03-f568ea010a1d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.462669 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.469404 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt" (OuterVolumeSpecName: "kube-api-access-fglvt") pod "51afbf84-17c8-4424-8f03-f568ea010a1d" (UID: "51afbf84-17c8-4424-8f03-f568ea010a1d"). InnerVolumeSpecName "kube-api-access-fglvt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.565059 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fglvt\" (UniqueName: \"kubernetes.io/projected/51afbf84-17c8-4424-8f03-f568ea010a1d-kube-api-access-fglvt\") on node \"crc\" DevicePath \"\""
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.602602 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51afbf84-17c8-4424-8f03-f568ea010a1d" (UID: "51afbf84-17c8-4424-8f03-f568ea010a1d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:27:57 crc kubenswrapper[4681]: I0404 03:27:57.666902 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51afbf84-17c8-4424-8f03-f568ea010a1d-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 03:27:58 crc kubenswrapper[4681]: I0404 03:27:58.201685 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4"
Apr 04 03:27:58 crc kubenswrapper[4681]: I0404 03:27:58.290834 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwp5f"
Apr 04 03:27:58 crc kubenswrapper[4681]: I0404 03:27:58.333101 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:58 crc kubenswrapper[4681]: I0404 03:27:58.342602 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rwp5f"]
Apr 04 03:27:59 crc kubenswrapper[4681]: I0404 03:27:59.217126 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" path="/var/lib/kubelet/pods/51afbf84-17c8-4424-8f03-f568ea010a1d/volumes"
Apr 04 03:27:59 crc kubenswrapper[4681]: I0404 03:27:59.311613 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5"}
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.152499 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587888-jmnxc"]
Apr 04 03:28:00 crc kubenswrapper[4681]: E0404 03:28:00.153628 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="registry-server"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.153654 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="registry-server"
Apr 04 03:28:00 crc kubenswrapper[4681]: E0404 03:28:00.153681 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="extract-content"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.153691 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="extract-content"
Apr 04 03:28:00 crc kubenswrapper[4681]: E0404 03:28:00.153718 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="extract-utilities"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.153726 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="extract-utilities"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.153995 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="51afbf84-17c8-4424-8f03-f568ea010a1d" containerName="registry-server"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.154781 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.157196 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.157737 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.159454 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.165706 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587888-jmnxc"]
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.227634 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc9hc\" (UniqueName: \"kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc\") pod \"auto-csr-approver-29587888-jmnxc\" (UID: \"96a01415-f006-40cb-9553-bd8e6488523b\") " pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.329469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc9hc\" (UniqueName: \"kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc\") pod \"auto-csr-approver-29587888-jmnxc\" (UID: \"96a01415-f006-40cb-9553-bd8e6488523b\") " pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.354640 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc9hc\" (UniqueName: \"kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc\") pod \"auto-csr-approver-29587888-jmnxc\" (UID: \"96a01415-f006-40cb-9553-bd8e6488523b\") " pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.472088 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:00 crc kubenswrapper[4681]: W0404 03:28:00.954553 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96a01415_f006_40cb_9553_bd8e6488523b.slice/crio-c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b WatchSource:0}: Error finding container c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b: Status 404 returned error can't find the container with id c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b
Apr 04 03:28:00 crc kubenswrapper[4681]: I0404 03:28:00.970996 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587888-jmnxc"]
Apr 04 03:28:01 crc kubenswrapper[4681]: I0404 03:28:01.334903 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" event={"ID":"96a01415-f006-40cb-9553-bd8e6488523b","Type":"ContainerStarted","Data":"c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b"}
Apr 04 03:28:02 crc kubenswrapper[4681]: I0404 03:28:02.344292 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" event={"ID":"96a01415-f006-40cb-9553-bd8e6488523b","Type":"ContainerStarted","Data":"26c49260993c043ad51c8014eb83f0faab76d6bbe9a3b5a06a0379896b9aa530"}
Apr 04 03:28:02 crc kubenswrapper[4681]: I0404 03:28:02.368869 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" podStartSLOduration=1.548801511 podStartE2EDuration="2.368847752s" podCreationTimestamp="2026-04-04 03:28:00 +0000 UTC" firstStartedPulling="2026-04-04 03:28:00.956971677 +0000 UTC m=+5560.622746797" lastFinishedPulling="2026-04-04 03:28:01.777017928 +0000 UTC m=+5561.442793038" observedRunningTime="2026-04-04 03:28:02.360612048 +0000 UTC m=+5562.026387178" watchObservedRunningTime="2026-04-04 03:28:02.368847752 +0000 UTC m=+5562.034622872"
Apr 04 03:28:03 crc kubenswrapper[4681]: I0404 03:28:03.356159 4681 generic.go:334] "Generic (PLEG): container finished" podID="96a01415-f006-40cb-9553-bd8e6488523b" containerID="26c49260993c043ad51c8014eb83f0faab76d6bbe9a3b5a06a0379896b9aa530" exitCode=0
Apr 04 03:28:03 crc kubenswrapper[4681]: I0404 03:28:03.356217 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" event={"ID":"96a01415-f006-40cb-9553-bd8e6488523b","Type":"ContainerDied","Data":"26c49260993c043ad51c8014eb83f0faab76d6bbe9a3b5a06a0379896b9aa530"}
Apr 04 03:28:04 crc kubenswrapper[4681]: I0404 03:28:04.773164 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587888-jmnxc"
Apr 04 03:28:04 crc kubenswrapper[4681]: I0404 03:28:04.836910 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc9hc\" (UniqueName: \"kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc\") pod \"96a01415-f006-40cb-9553-bd8e6488523b\" (UID: \"96a01415-f006-40cb-9553-bd8e6488523b\") "
Apr 04 03:28:04 crc kubenswrapper[4681]: I0404 03:28:04.843901 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc" (OuterVolumeSpecName: "kube-api-access-tc9hc") pod "96a01415-f006-40cb-9553-bd8e6488523b" (UID: "96a01415-f006-40cb-9553-bd8e6488523b"). InnerVolumeSpecName "kube-api-access-tc9hc".
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:28:04 crc kubenswrapper[4681]: I0404 03:28:04.939880 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc9hc\" (UniqueName: \"kubernetes.io/projected/96a01415-f006-40cb-9553-bd8e6488523b-kube-api-access-tc9hc\") on node \"crc\" DevicePath \"\"" Apr 04 03:28:05 crc kubenswrapper[4681]: I0404 03:28:05.377140 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" event={"ID":"96a01415-f006-40cb-9553-bd8e6488523b","Type":"ContainerDied","Data":"c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b"} Apr 04 03:28:05 crc kubenswrapper[4681]: I0404 03:28:05.377443 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c47999923f4c931838100101251eb35fabdb8cf760dd7b5c806f6f1fb2239b4b" Apr 04 03:28:05 crc kubenswrapper[4681]: I0404 03:28:05.377467 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587888-jmnxc" Apr 04 03:28:05 crc kubenswrapper[4681]: I0404 03:28:05.429308 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587882-t457x"] Apr 04 03:28:05 crc kubenswrapper[4681]: I0404 03:28:05.439672 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587882-t457x"] Apr 04 03:28:07 crc kubenswrapper[4681]: I0404 03:28:07.217699 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30df2dbd-7792-4460-a15f-5d314523995f" path="/var/lib/kubelet/pods/30df2dbd-7792-4460-a15f-5d314523995f/volumes" Apr 04 03:28:34 crc kubenswrapper[4681]: I0404 03:28:34.065943 4681 scope.go:117] "RemoveContainer" containerID="6b7e23698ebe60a3272d07ec786229e1f661f83b5725b03a68909f53b112a709" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.156858 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29587890-g4hk7"] Apr 04 03:30:00 crc kubenswrapper[4681]: E0404 03:30:00.157977 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96a01415-f006-40cb-9553-bd8e6488523b" containerName="oc" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.158012 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="96a01415-f006-40cb-9553-bd8e6488523b" containerName="oc" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.158294 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="96a01415-f006-40cb-9553-bd8e6488523b" containerName="oc" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.159205 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.163528 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.163772 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.165333 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.169491 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct"] Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.171422 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.172924 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.172928 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.181822 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct"] Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.194878 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587890-g4hk7"] Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.324769 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.325089 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.325111 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shlqc\" (UniqueName: 
\"kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc\") pod \"auto-csr-approver-29587890-g4hk7\" (UID: \"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc\") " pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.325509 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trqrb\" (UniqueName: \"kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.428064 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.428143 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.428165 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shlqc\" (UniqueName: \"kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc\") pod \"auto-csr-approver-29587890-g4hk7\" (UID: \"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc\") " pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 
03:30:00.428324 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trqrb\" (UniqueName: \"kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.428987 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.434308 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.448200 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trqrb\" (UniqueName: \"kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb\") pod \"collect-profiles-29587890-hfgct\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.451963 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shlqc\" (UniqueName: \"kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc\") pod \"auto-csr-approver-29587890-g4hk7\" (UID: \"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc\") " 
pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.494965 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:00 crc kubenswrapper[4681]: I0404 03:30:00.519081 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.011562 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587890-g4hk7"] Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.162930 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct"] Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.571916 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" event={"ID":"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc","Type":"ContainerStarted","Data":"cd0afb3740ecb6a02d994f19de71a92b7f441b58b341d3c9833a87d478df3de0"} Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.574529 4681 generic.go:334] "Generic (PLEG): container finished" podID="90659be2-d637-4eab-acb9-3b2774ba36e5" containerID="06b8df05b6d0580ede0bf8bf4ee7b6b36a387f11643eeb87b9c441559efa6fac" exitCode=0 Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.574575 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" event={"ID":"90659be2-d637-4eab-acb9-3b2774ba36e5","Type":"ContainerDied","Data":"06b8df05b6d0580ede0bf8bf4ee7b6b36a387f11643eeb87b9c441559efa6fac"} Apr 04 03:30:01 crc kubenswrapper[4681]: I0404 03:30:01.574619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" 
event={"ID":"90659be2-d637-4eab-acb9-3b2774ba36e5","Type":"ContainerStarted","Data":"120d79a8d93b6a59f71b8255592d69421576b81ccf8306da1eb4d68395e19f38"} Apr 04 03:30:02 crc kubenswrapper[4681]: I0404 03:30:02.984577 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.085488 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume\") pod \"90659be2-d637-4eab-acb9-3b2774ba36e5\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.085805 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trqrb\" (UniqueName: \"kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb\") pod \"90659be2-d637-4eab-acb9-3b2774ba36e5\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.086005 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume\") pod \"90659be2-d637-4eab-acb9-3b2774ba36e5\" (UID: \"90659be2-d637-4eab-acb9-3b2774ba36e5\") " Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.086399 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume" (OuterVolumeSpecName: "config-volume") pod "90659be2-d637-4eab-acb9-3b2774ba36e5" (UID: "90659be2-d637-4eab-acb9-3b2774ba36e5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.086510 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90659be2-d637-4eab-acb9-3b2774ba36e5-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.092523 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb" (OuterVolumeSpecName: "kube-api-access-trqrb") pod "90659be2-d637-4eab-acb9-3b2774ba36e5" (UID: "90659be2-d637-4eab-acb9-3b2774ba36e5"). InnerVolumeSpecName "kube-api-access-trqrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.092644 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90659be2-d637-4eab-acb9-3b2774ba36e5" (UID: "90659be2-d637-4eab-acb9-3b2774ba36e5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.187991 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90659be2-d637-4eab-acb9-3b2774ba36e5-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.188030 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trqrb\" (UniqueName: \"kubernetes.io/projected/90659be2-d637-4eab-acb9-3b2774ba36e5-kube-api-access-trqrb\") on node \"crc\" DevicePath \"\"" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.595548 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" event={"ID":"90659be2-d637-4eab-acb9-3b2774ba36e5","Type":"ContainerDied","Data":"120d79a8d93b6a59f71b8255592d69421576b81ccf8306da1eb4d68395e19f38"} Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.595650 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="120d79a8d93b6a59f71b8255592d69421576b81ccf8306da1eb4d68395e19f38" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.595574 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587890-hfgct" Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.597844 4681 generic.go:334] "Generic (PLEG): container finished" podID="cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" containerID="ebfbdf1cab7f4178fb9d343b366f976f791bdd4c2a7b4fca962ca3748a6dd04d" exitCode=0 Apr 04 03:30:03 crc kubenswrapper[4681]: I0404 03:30:03.597891 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" event={"ID":"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc","Type":"ContainerDied","Data":"ebfbdf1cab7f4178fb9d343b366f976f791bdd4c2a7b4fca962ca3748a6dd04d"} Apr 04 03:30:04 crc kubenswrapper[4681]: I0404 03:30:04.077165 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5"] Apr 04 03:30:04 crc kubenswrapper[4681]: I0404 03:30:04.088814 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587845-2dmx5"] Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.211647 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6342da4d-517c-485b-8d88-5fc59e542232" path="/var/lib/kubelet/pods/6342da4d-517c-485b-8d88-5fc59e542232/volumes" Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.624726 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" event={"ID":"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc","Type":"ContainerDied","Data":"cd0afb3740ecb6a02d994f19de71a92b7f441b58b341d3c9833a87d478df3de0"} Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.624785 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd0afb3740ecb6a02d994f19de71a92b7f441b58b341d3c9833a87d478df3de0" Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.726438 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.847837 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shlqc\" (UniqueName: \"kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc\") pod \"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc\" (UID: \"cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc\") " Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.854597 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc" (OuterVolumeSpecName: "kube-api-access-shlqc") pod "cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" (UID: "cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc"). InnerVolumeSpecName "kube-api-access-shlqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:30:05 crc kubenswrapper[4681]: I0404 03:30:05.950624 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shlqc\" (UniqueName: \"kubernetes.io/projected/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc-kube-api-access-shlqc\") on node \"crc\" DevicePath \"\"" Apr 04 03:30:06 crc kubenswrapper[4681]: I0404 03:30:06.632610 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587890-g4hk7" Apr 04 03:30:06 crc kubenswrapper[4681]: I0404 03:30:06.785218 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587884-q5qlt"] Apr 04 03:30:06 crc kubenswrapper[4681]: I0404 03:30:06.794678 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587884-q5qlt"] Apr 04 03:30:07 crc kubenswrapper[4681]: I0404 03:30:07.232916 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f49d320b-31c2-48d9-909e-7905bb70030e" path="/var/lib/kubelet/pods/f49d320b-31c2-48d9-909e-7905bb70030e/volumes" Apr 04 03:30:26 crc kubenswrapper[4681]: I0404 03:30:26.523807 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:30:26 crc kubenswrapper[4681]: I0404 03:30:26.524353 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:30:34 crc kubenswrapper[4681]: I0404 03:30:34.182781 4681 scope.go:117] "RemoveContainer" containerID="2eb0cf7e931a6d66a799ae8f50332adde04d8492cb844f527ebee656412baecc" Apr 04 03:30:34 crc kubenswrapper[4681]: I0404 03:30:34.265016 4681 scope.go:117] "RemoveContainer" containerID="3af678ce9b7c15c9c7b23fd447f9427f707a59e0bd727d8fe1ddb8d611854f49" Apr 04 03:30:56 crc kubenswrapper[4681]: I0404 03:30:56.523733 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:30:56 crc kubenswrapper[4681]: I0404 03:30:56.524440 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:31:26 crc kubenswrapper[4681]: I0404 03:31:26.523901 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:31:26 crc kubenswrapper[4681]: I0404 03:31:26.524622 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:31:26 crc kubenswrapper[4681]: I0404 03:31:26.524670 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:31:26 crc kubenswrapper[4681]: I0404 03:31:26.525851 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:31:26 crc kubenswrapper[4681]: I0404 03:31:26.525918 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5" gracePeriod=600 Apr 04 03:31:27 crc kubenswrapper[4681]: I0404 03:31:27.516472 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5" exitCode=0 Apr 04 03:31:27 crc kubenswrapper[4681]: I0404 03:31:27.516558 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5"} Apr 04 03:31:27 crc kubenswrapper[4681]: I0404 03:31:27.517152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a"} Apr 04 03:31:27 crc kubenswrapper[4681]: I0404 03:31:27.517183 4681 scope.go:117] "RemoveContainer" containerID="b6e3c8e3bb709da31661bbcc7d5fde86a5510b8de30a5bd05cb1e0ec6a6ed0b4" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.160517 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587892-mwdbw"] Apr 04 03:32:00 crc kubenswrapper[4681]: E0404 03:32:00.161484 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90659be2-d637-4eab-acb9-3b2774ba36e5" containerName="collect-profiles" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.161498 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="90659be2-d637-4eab-acb9-3b2774ba36e5" 
containerName="collect-profiles" Apr 04 03:32:00 crc kubenswrapper[4681]: E0404 03:32:00.161527 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" containerName="oc" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.161534 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" containerName="oc" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.161734 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" containerName="oc" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.161752 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="90659be2-d637-4eab-acb9-3b2774ba36e5" containerName="collect-profiles" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.162482 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.166036 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.166109 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.166341 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.174389 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587892-mwdbw"] Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.256911 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp4pt\" (UniqueName: \"kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt\") pod 
\"auto-csr-approver-29587892-mwdbw\" (UID: \"15eb65ea-57bf-4144-b436-4d5b18e25a0e\") " pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.359310 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp4pt\" (UniqueName: \"kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt\") pod \"auto-csr-approver-29587892-mwdbw\" (UID: \"15eb65ea-57bf-4144-b436-4d5b18e25a0e\") " pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.379416 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp4pt\" (UniqueName: \"kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt\") pod \"auto-csr-approver-29587892-mwdbw\" (UID: \"15eb65ea-57bf-4144-b436-4d5b18e25a0e\") " pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.495884 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.935567 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587892-mwdbw"] Apr 04 03:32:00 crc kubenswrapper[4681]: W0404 03:32:00.941973 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15eb65ea_57bf_4144_b436_4d5b18e25a0e.slice/crio-4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6 WatchSource:0}: Error finding container 4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6: Status 404 returned error can't find the container with id 4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6 Apr 04 03:32:00 crc kubenswrapper[4681]: I0404 03:32:00.946367 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:32:01 crc kubenswrapper[4681]: I0404 03:32:01.874364 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" event={"ID":"15eb65ea-57bf-4144-b436-4d5b18e25a0e","Type":"ContainerStarted","Data":"4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6"} Apr 04 03:32:02 crc kubenswrapper[4681]: I0404 03:32:02.886412 4681 generic.go:334] "Generic (PLEG): container finished" podID="15eb65ea-57bf-4144-b436-4d5b18e25a0e" containerID="ab3e37c459ad3c1a45cc0d80f6a5febffe5d2843f2b3b526554828a459c32b44" exitCode=0 Apr 04 03:32:02 crc kubenswrapper[4681]: I0404 03:32:02.886541 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" event={"ID":"15eb65ea-57bf-4144-b436-4d5b18e25a0e","Type":"ContainerDied","Data":"ab3e37c459ad3c1a45cc0d80f6a5febffe5d2843f2b3b526554828a459c32b44"} Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.282491 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.375453 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp4pt\" (UniqueName: \"kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt\") pod \"15eb65ea-57bf-4144-b436-4d5b18e25a0e\" (UID: \"15eb65ea-57bf-4144-b436-4d5b18e25a0e\") " Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.388488 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt" (OuterVolumeSpecName: "kube-api-access-zp4pt") pod "15eb65ea-57bf-4144-b436-4d5b18e25a0e" (UID: "15eb65ea-57bf-4144-b436-4d5b18e25a0e"). InnerVolumeSpecName "kube-api-access-zp4pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.477949 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp4pt\" (UniqueName: \"kubernetes.io/projected/15eb65ea-57bf-4144-b436-4d5b18e25a0e-kube-api-access-zp4pt\") on node \"crc\" DevicePath \"\"" Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.918690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" event={"ID":"15eb65ea-57bf-4144-b436-4d5b18e25a0e","Type":"ContainerDied","Data":"4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6"} Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.919170 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c7fd8adee90e8d680c0ef8bad3f9d3da45f9ff13f5ed4cf2072abe74cb799e6" Apr 04 03:32:04 crc kubenswrapper[4681]: I0404 03:32:04.918982 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587892-mwdbw" Apr 04 03:32:05 crc kubenswrapper[4681]: I0404 03:32:05.373728 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587886-nfnbm"] Apr 04 03:32:05 crc kubenswrapper[4681]: I0404 03:32:05.384570 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587886-nfnbm"] Apr 04 03:32:07 crc kubenswrapper[4681]: I0404 03:32:07.211574 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1c55d0-61c9-4b1c-8caa-796bd323b27a" path="/var/lib/kubelet/pods/4e1c55d0-61c9-4b1c-8caa-796bd323b27a/volumes" Apr 04 03:32:34 crc kubenswrapper[4681]: I0404 03:32:34.367139 4681 scope.go:117] "RemoveContainer" containerID="ad8f63db96a91428e0f9dae18134a8ecbdc84dd163f5b6dad2fb16e5986499e0" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.506323 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:07 crc kubenswrapper[4681]: E0404 03:33:07.507532 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15eb65ea-57bf-4144-b436-4d5b18e25a0e" containerName="oc" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.507551 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="15eb65ea-57bf-4144-b436-4d5b18e25a0e" containerName="oc" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.507813 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="15eb65ea-57bf-4144-b436-4d5b18e25a0e" containerName="oc" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.509716 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.537821 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.611973 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7f5\" (UniqueName: \"kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.612066 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.612204 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.714509 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.714644 4681 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.714764 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7f5\" (UniqueName: \"kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.715077 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.715116 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.746160 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7f5\" (UniqueName: \"kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5\") pod \"redhat-marketplace-xrwwj\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:07 crc kubenswrapper[4681]: I0404 03:33:07.836028 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:08 crc kubenswrapper[4681]: I0404 03:33:08.300747 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:08 crc kubenswrapper[4681]: I0404 03:33:08.667709 4681 generic.go:334] "Generic (PLEG): container finished" podID="6667e052-a901-4571-8581-959b3bbfed62" containerID="80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0" exitCode=0 Apr 04 03:33:08 crc kubenswrapper[4681]: I0404 03:33:08.667759 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerDied","Data":"80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0"} Apr 04 03:33:08 crc kubenswrapper[4681]: I0404 03:33:08.667980 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerStarted","Data":"dab44333edd90f80c00e8df4100c4f93cc159c4fbfdddce2c4ed4d54dfa2392a"} Apr 04 03:33:09 crc kubenswrapper[4681]: I0404 03:33:09.680826 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerStarted","Data":"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523"} Apr 04 03:33:10 crc kubenswrapper[4681]: I0404 03:33:10.693661 4681 generic.go:334] "Generic (PLEG): container finished" podID="6667e052-a901-4571-8581-959b3bbfed62" containerID="4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523" exitCode=0 Apr 04 03:33:10 crc kubenswrapper[4681]: I0404 03:33:10.693725 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" 
event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerDied","Data":"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523"} Apr 04 03:33:11 crc kubenswrapper[4681]: I0404 03:33:11.705668 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerStarted","Data":"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce"} Apr 04 03:33:11 crc kubenswrapper[4681]: I0404 03:33:11.725331 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrwwj" podStartSLOduration=2.239245173 podStartE2EDuration="4.72530894s" podCreationTimestamp="2026-04-04 03:33:07 +0000 UTC" firstStartedPulling="2026-04-04 03:33:08.670405255 +0000 UTC m=+5868.336180415" lastFinishedPulling="2026-04-04 03:33:11.156469062 +0000 UTC m=+5870.822244182" observedRunningTime="2026-04-04 03:33:11.724433726 +0000 UTC m=+5871.390208866" watchObservedRunningTime="2026-04-04 03:33:11.72530894 +0000 UTC m=+5871.391084060" Apr 04 03:33:17 crc kubenswrapper[4681]: I0404 03:33:17.836560 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:17 crc kubenswrapper[4681]: I0404 03:33:17.837057 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:17 crc kubenswrapper[4681]: I0404 03:33:17.882038 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:18 crc kubenswrapper[4681]: I0404 03:33:18.851909 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:18 crc kubenswrapper[4681]: I0404 03:33:18.925134 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:20 crc kubenswrapper[4681]: I0404 03:33:20.801494 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrwwj" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="registry-server" containerID="cri-o://dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce" gracePeriod=2 Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.275186 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.438097 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities\") pod \"6667e052-a901-4571-8581-959b3bbfed62\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.438577 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7f5\" (UniqueName: \"kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5\") pod \"6667e052-a901-4571-8581-959b3bbfed62\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.438607 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content\") pod \"6667e052-a901-4571-8581-959b3bbfed62\" (UID: \"6667e052-a901-4571-8581-959b3bbfed62\") " Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.438995 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities" (OuterVolumeSpecName: "utilities") pod "6667e052-a901-4571-8581-959b3bbfed62" (UID: 
"6667e052-a901-4571-8581-959b3bbfed62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.439293 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.444726 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5" (OuterVolumeSpecName: "kube-api-access-8z7f5") pod "6667e052-a901-4571-8581-959b3bbfed62" (UID: "6667e052-a901-4571-8581-959b3bbfed62"). InnerVolumeSpecName "kube-api-access-8z7f5". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.474465 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6667e052-a901-4571-8581-959b3bbfed62" (UID: "6667e052-a901-4571-8581-959b3bbfed62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.541402 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7f5\" (UniqueName: \"kubernetes.io/projected/6667e052-a901-4571-8581-959b3bbfed62-kube-api-access-8z7f5\") on node \"crc\" DevicePath \"\"" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.541437 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6667e052-a901-4571-8581-959b3bbfed62-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.812723 4681 generic.go:334] "Generic (PLEG): container finished" podID="6667e052-a901-4571-8581-959b3bbfed62" containerID="dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce" exitCode=0 Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.812774 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerDied","Data":"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce"} Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.812799 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrwwj" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.812817 4681 scope.go:117] "RemoveContainer" containerID="dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.812804 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrwwj" event={"ID":"6667e052-a901-4571-8581-959b3bbfed62","Type":"ContainerDied","Data":"dab44333edd90f80c00e8df4100c4f93cc159c4fbfdddce2c4ed4d54dfa2392a"} Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.846921 4681 scope.go:117] "RemoveContainer" containerID="4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.850239 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.861837 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrwwj"] Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.872238 4681 scope.go:117] "RemoveContainer" containerID="80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.928437 4681 scope.go:117] "RemoveContainer" containerID="dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce" Apr 04 03:33:21 crc kubenswrapper[4681]: E0404 03:33:21.929057 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce\": container with ID starting with dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce not found: ID does not exist" containerID="dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.929107 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce"} err="failed to get container status \"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce\": rpc error: code = NotFound desc = could not find container \"dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce\": container with ID starting with dfda4d9b274185d54673f7b685d565c5fb697c20c6e0755da1e18929390effce not found: ID does not exist" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.929140 4681 scope.go:117] "RemoveContainer" containerID="4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523" Apr 04 03:33:21 crc kubenswrapper[4681]: E0404 03:33:21.929596 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523\": container with ID starting with 4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523 not found: ID does not exist" containerID="4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.929635 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523"} err="failed to get container status \"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523\": rpc error: code = NotFound desc = could not find container \"4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523\": container with ID starting with 4febd1ee0bac036daa767908d6ee6e583fd62783ed16cde52bc2f9986b371523 not found: ID does not exist" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.929659 4681 scope.go:117] "RemoveContainer" containerID="80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0" Apr 04 03:33:21 crc kubenswrapper[4681]: E0404 
03:33:21.930523 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0\": container with ID starting with 80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0 not found: ID does not exist" containerID="80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0" Apr 04 03:33:21 crc kubenswrapper[4681]: I0404 03:33:21.930567 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0"} err="failed to get container status \"80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0\": rpc error: code = NotFound desc = could not find container \"80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0\": container with ID starting with 80b24036d8382f1cdb0aea4946dd580dd7853b840b61064481b9ec69eca3aba0 not found: ID does not exist" Apr 04 03:33:23 crc kubenswrapper[4681]: I0404 03:33:23.220698 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6667e052-a901-4571-8581-959b3bbfed62" path="/var/lib/kubelet/pods/6667e052-a901-4571-8581-959b3bbfed62/volumes" Apr 04 03:33:26 crc kubenswrapper[4681]: I0404 03:33:26.524501 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:33:26 crc kubenswrapper[4681]: I0404 03:33:26.525177 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Apr 04 03:33:56 crc kubenswrapper[4681]: I0404 03:33:56.524582 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:33:56 crc kubenswrapper[4681]: I0404 03:33:56.525060 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.162341 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587894-tbc8x"] Apr 04 03:34:00 crc kubenswrapper[4681]: E0404 03:34:00.163226 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="extract-content" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.163239 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="extract-content" Apr 04 03:34:00 crc kubenswrapper[4681]: E0404 03:34:00.163262 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="registry-server" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.163268 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="registry-server" Apr 04 03:34:00 crc kubenswrapper[4681]: E0404 03:34:00.163278 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="extract-utilities" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.163304 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="extract-utilities" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.163559 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6667e052-a901-4571-8581-959b3bbfed62" containerName="registry-server" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.164265 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.166913 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.167028 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.168320 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.188036 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587894-tbc8x"] Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.216751 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq5ks\" (UniqueName: \"kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks\") pod \"auto-csr-approver-29587894-tbc8x\" (UID: \"88694644-a5da-4e3a-a6eb-7f394a06c826\") " pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.317603 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq5ks\" (UniqueName: \"kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks\") pod \"auto-csr-approver-29587894-tbc8x\" (UID: 
\"88694644-a5da-4e3a-a6eb-7f394a06c826\") " pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.350239 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq5ks\" (UniqueName: \"kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks\") pod \"auto-csr-approver-29587894-tbc8x\" (UID: \"88694644-a5da-4e3a-a6eb-7f394a06c826\") " pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.481536 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:00 crc kubenswrapper[4681]: I0404 03:34:00.941044 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587894-tbc8x"] Apr 04 03:34:01 crc kubenswrapper[4681]: I0404 03:34:01.280336 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" event={"ID":"88694644-a5da-4e3a-a6eb-7f394a06c826","Type":"ContainerStarted","Data":"91adbf0235a62be780f83fbc548ba580ad6cca7edf6b59db858b86c89ba39b21"} Apr 04 03:34:02 crc kubenswrapper[4681]: I0404 03:34:02.292758 4681 generic.go:334] "Generic (PLEG): container finished" podID="88694644-a5da-4e3a-a6eb-7f394a06c826" containerID="a3dfc533218a8b3779f52f8ad941ee296040fd6aee03a14dba61c589fa046d26" exitCode=0 Apr 04 03:34:02 crc kubenswrapper[4681]: I0404 03:34:02.292832 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" event={"ID":"88694644-a5da-4e3a-a6eb-7f394a06c826","Type":"ContainerDied","Data":"a3dfc533218a8b3779f52f8ad941ee296040fd6aee03a14dba61c589fa046d26"} Apr 04 03:34:03 crc kubenswrapper[4681]: I0404 03:34:03.670945 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:03 crc kubenswrapper[4681]: I0404 03:34:03.793162 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq5ks\" (UniqueName: \"kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks\") pod \"88694644-a5da-4e3a-a6eb-7f394a06c826\" (UID: \"88694644-a5da-4e3a-a6eb-7f394a06c826\") " Apr 04 03:34:03 crc kubenswrapper[4681]: I0404 03:34:03.799212 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks" (OuterVolumeSpecName: "kube-api-access-sq5ks") pod "88694644-a5da-4e3a-a6eb-7f394a06c826" (UID: "88694644-a5da-4e3a-a6eb-7f394a06c826"). InnerVolumeSpecName "kube-api-access-sq5ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:34:03 crc kubenswrapper[4681]: I0404 03:34:03.895377 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq5ks\" (UniqueName: \"kubernetes.io/projected/88694644-a5da-4e3a-a6eb-7f394a06c826-kube-api-access-sq5ks\") on node \"crc\" DevicePath \"\"" Apr 04 03:34:04 crc kubenswrapper[4681]: I0404 03:34:04.321457 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" event={"ID":"88694644-a5da-4e3a-a6eb-7f394a06c826","Type":"ContainerDied","Data":"91adbf0235a62be780f83fbc548ba580ad6cca7edf6b59db858b86c89ba39b21"} Apr 04 03:34:04 crc kubenswrapper[4681]: I0404 03:34:04.321498 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91adbf0235a62be780f83fbc548ba580ad6cca7edf6b59db858b86c89ba39b21" Apr 04 03:34:04 crc kubenswrapper[4681]: I0404 03:34:04.321590 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587894-tbc8x" Apr 04 03:34:04 crc kubenswrapper[4681]: I0404 03:34:04.750413 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587888-jmnxc"] Apr 04 03:34:04 crc kubenswrapper[4681]: I0404 03:34:04.761076 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587888-jmnxc"] Apr 04 03:34:05 crc kubenswrapper[4681]: I0404 03:34:05.213925 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96a01415-f006-40cb-9553-bd8e6488523b" path="/var/lib/kubelet/pods/96a01415-f006-40cb-9553-bd8e6488523b/volumes" Apr 04 03:34:26 crc kubenswrapper[4681]: I0404 03:34:26.524794 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:34:26 crc kubenswrapper[4681]: I0404 03:34:26.525336 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:34:26 crc kubenswrapper[4681]: I0404 03:34:26.525392 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:34:26 crc kubenswrapper[4681]: I0404 03:34:26.526222 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:34:26 crc kubenswrapper[4681]: I0404 03:34:26.526306 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" gracePeriod=600 Apr 04 03:34:26 crc kubenswrapper[4681]: E0404 03:34:26.652432 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:34:27 crc kubenswrapper[4681]: I0404 03:34:27.553658 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" exitCode=0 Apr 04 03:34:27 crc kubenswrapper[4681]: I0404 03:34:27.553723 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a"} Apr 04 03:34:27 crc kubenswrapper[4681]: I0404 03:34:27.553979 4681 scope.go:117] "RemoveContainer" containerID="50526199248db1b53a9e352e549d4ec2688260bc70be10359c4b6d7bae4c68b5" Apr 04 03:34:27 crc kubenswrapper[4681]: I0404 03:34:27.554819 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:34:27 crc kubenswrapper[4681]: E0404 03:34:27.555166 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:34:34 crc kubenswrapper[4681]: I0404 03:34:34.503021 4681 scope.go:117] "RemoveContainer" containerID="748366a0d26b3d843d1ca17d8c5cd8984499f583816d72b6649af98e1913f67f" Apr 04 03:34:34 crc kubenswrapper[4681]: I0404 03:34:34.622214 4681 scope.go:117] "RemoveContainer" containerID="2dd098cca2fa2d07e749264a1c406a17fe7b53ab99c04c26414629cd0592c336" Apr 04 03:34:34 crc kubenswrapper[4681]: I0404 03:34:34.686921 4681 scope.go:117] "RemoveContainer" containerID="26c49260993c043ad51c8014eb83f0faab76d6bbe9a3b5a06a0379896b9aa530" Apr 04 03:34:34 crc kubenswrapper[4681]: I0404 03:34:34.733099 4681 scope.go:117] "RemoveContainer" containerID="5095f5425b7a348536ffed9d23219a4c8ba98a561a1ce8d6764af20d8e2463d0" Apr 04 03:34:39 crc kubenswrapper[4681]: I0404 03:34:39.201761 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:34:39 crc kubenswrapper[4681]: E0404 03:34:39.202447 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.735879 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:34:47 
crc kubenswrapper[4681]: E0404 03:34:47.737128 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88694644-a5da-4e3a-a6eb-7f394a06c826" containerName="oc" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.737146 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="88694644-a5da-4e3a-a6eb-7f394a06c826" containerName="oc" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.737534 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="88694644-a5da-4e3a-a6eb-7f394a06c826" containerName="oc" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.739457 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.753993 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.835754 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.835853 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgs6b\" (UniqueName: \"kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.835917 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.938357 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.938801 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgs6b\" (UniqueName: \"kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.938873 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.939061 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.939344 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:47 crc kubenswrapper[4681]: I0404 03:34:47.963889 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgs6b\" (UniqueName: \"kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b\") pod \"community-operators-cnrm6\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:48 crc kubenswrapper[4681]: I0404 03:34:48.071123 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:48 crc kubenswrapper[4681]: W0404 03:34:48.636113 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5dbdb54c_a681_4478_8841_152b0738cc37.slice/crio-7618d627a2b427e3d2adf84cefb3356d797eda82f2a87263d87f975936b022f3 WatchSource:0}: Error finding container 7618d627a2b427e3d2adf84cefb3356d797eda82f2a87263d87f975936b022f3: Status 404 returned error can't find the container with id 7618d627a2b427e3d2adf84cefb3356d797eda82f2a87263d87f975936b022f3 Apr 04 03:34:48 crc kubenswrapper[4681]: I0404 03:34:48.637347 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:34:48 crc kubenswrapper[4681]: I0404 03:34:48.781240 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerStarted","Data":"7618d627a2b427e3d2adf84cefb3356d797eda82f2a87263d87f975936b022f3"} Apr 04 03:34:49 crc kubenswrapper[4681]: I0404 03:34:49.793829 4681 generic.go:334] "Generic (PLEG): container finished" 
podID="5dbdb54c-a681-4478-8841-152b0738cc37" containerID="4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb" exitCode=0 Apr 04 03:34:49 crc kubenswrapper[4681]: I0404 03:34:49.793922 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerDied","Data":"4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb"} Apr 04 03:34:51 crc kubenswrapper[4681]: I0404 03:34:51.822769 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerStarted","Data":"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579"} Apr 04 03:34:52 crc kubenswrapper[4681]: I0404 03:34:52.837860 4681 generic.go:334] "Generic (PLEG): container finished" podID="5dbdb54c-a681-4478-8841-152b0738cc37" containerID="1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579" exitCode=0 Apr 04 03:34:52 crc kubenswrapper[4681]: I0404 03:34:52.838047 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerDied","Data":"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579"} Apr 04 03:34:53 crc kubenswrapper[4681]: I0404 03:34:53.853765 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerStarted","Data":"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201"} Apr 04 03:34:53 crc kubenswrapper[4681]: I0404 03:34:53.878834 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cnrm6" podStartSLOduration=3.491559113 podStartE2EDuration="6.878815245s" podCreationTimestamp="2026-04-04 03:34:47 +0000 UTC" 
firstStartedPulling="2026-04-04 03:34:49.795929677 +0000 UTC m=+5969.461704797" lastFinishedPulling="2026-04-04 03:34:53.183185799 +0000 UTC m=+5972.848960929" observedRunningTime="2026-04-04 03:34:53.871246999 +0000 UTC m=+5973.537022199" watchObservedRunningTime="2026-04-04 03:34:53.878815245 +0000 UTC m=+5973.544590355" Apr 04 03:34:54 crc kubenswrapper[4681]: I0404 03:34:54.201581 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:34:54 crc kubenswrapper[4681]: E0404 03:34:54.201865 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:34:58 crc kubenswrapper[4681]: I0404 03:34:58.072594 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:58 crc kubenswrapper[4681]: I0404 03:34:58.073092 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:58 crc kubenswrapper[4681]: I0404 03:34:58.126873 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:34:58 crc kubenswrapper[4681]: I0404 03:34:58.945753 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:35:00 crc kubenswrapper[4681]: I0404 03:35:00.126881 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:35:00 crc kubenswrapper[4681]: I0404 03:35:00.928716 4681 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cnrm6" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="registry-server" containerID="cri-o://7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201" gracePeriod=2 Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.444334 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.558963 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content\") pod \"5dbdb54c-a681-4478-8841-152b0738cc37\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.559332 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities\") pod \"5dbdb54c-a681-4478-8841-152b0738cc37\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.559629 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgs6b\" (UniqueName: \"kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b\") pod \"5dbdb54c-a681-4478-8841-152b0738cc37\" (UID: \"5dbdb54c-a681-4478-8841-152b0738cc37\") " Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.560554 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities" (OuterVolumeSpecName: "utilities") pod "5dbdb54c-a681-4478-8841-152b0738cc37" (UID: "5dbdb54c-a681-4478-8841-152b0738cc37"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.585359 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b" (OuterVolumeSpecName: "kube-api-access-tgs6b") pod "5dbdb54c-a681-4478-8841-152b0738cc37" (UID: "5dbdb54c-a681-4478-8841-152b0738cc37"). InnerVolumeSpecName "kube-api-access-tgs6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.622622 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dbdb54c-a681-4478-8841-152b0738cc37" (UID: "5dbdb54c-a681-4478-8841-152b0738cc37"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.676874 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgs6b\" (UniqueName: \"kubernetes.io/projected/5dbdb54c-a681-4478-8841-152b0738cc37-kube-api-access-tgs6b\") on node \"crc\" DevicePath \"\"" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.676908 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.676917 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dbdb54c-a681-4478-8841-152b0738cc37-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.938090 4681 generic.go:334] "Generic (PLEG): container finished" podID="5dbdb54c-a681-4478-8841-152b0738cc37" 
containerID="7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201" exitCode=0 Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.938135 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cnrm6" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.938154 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerDied","Data":"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201"} Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.939065 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cnrm6" event={"ID":"5dbdb54c-a681-4478-8841-152b0738cc37","Type":"ContainerDied","Data":"7618d627a2b427e3d2adf84cefb3356d797eda82f2a87263d87f975936b022f3"} Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.939120 4681 scope.go:117] "RemoveContainer" containerID="7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.975290 4681 scope.go:117] "RemoveContainer" containerID="1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579" Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.986716 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:35:01 crc kubenswrapper[4681]: I0404 03:35:01.995735 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cnrm6"] Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.004632 4681 scope.go:117] "RemoveContainer" containerID="4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.055703 4681 scope.go:117] "RemoveContainer" containerID="7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201" Apr 04 
03:35:02 crc kubenswrapper[4681]: E0404 03:35:02.056132 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201\": container with ID starting with 7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201 not found: ID does not exist" containerID="7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.056163 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201"} err="failed to get container status \"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201\": rpc error: code = NotFound desc = could not find container \"7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201\": container with ID starting with 7e0e35d3a658c7c525eae84df5bdf2ff879e9aedff380d197569fbe8a0325201 not found: ID does not exist" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.056182 4681 scope.go:117] "RemoveContainer" containerID="1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579" Apr 04 03:35:02 crc kubenswrapper[4681]: E0404 03:35:02.056759 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579\": container with ID starting with 1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579 not found: ID does not exist" containerID="1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.056789 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579"} err="failed to get container status 
\"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579\": rpc error: code = NotFound desc = could not find container \"1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579\": container with ID starting with 1e1c5474275d9333738217986484dfb64b5426e5c40dc6ae457ddbccbb8a0579 not found: ID does not exist" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.056807 4681 scope.go:117] "RemoveContainer" containerID="4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb" Apr 04 03:35:02 crc kubenswrapper[4681]: E0404 03:35:02.057182 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb\": container with ID starting with 4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb not found: ID does not exist" containerID="4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb" Apr 04 03:35:02 crc kubenswrapper[4681]: I0404 03:35:02.057232 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb"} err="failed to get container status \"4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb\": rpc error: code = NotFound desc = could not find container \"4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb\": container with ID starting with 4a506bb2ebb96a825f50611014758d3e9b98389342ed8403766de3c67d3dd9cb not found: ID does not exist" Apr 04 03:35:03 crc kubenswrapper[4681]: I0404 03:35:03.212447 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" path="/var/lib/kubelet/pods/5dbdb54c-a681-4478-8841-152b0738cc37/volumes" Apr 04 03:35:06 crc kubenswrapper[4681]: I0404 03:35:06.201661 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 
03:35:06 crc kubenswrapper[4681]: E0404 03:35:06.202290 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:35:19 crc kubenswrapper[4681]: I0404 03:35:19.201236 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:35:19 crc kubenswrapper[4681]: E0404 03:35:19.201957 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:35:31 crc kubenswrapper[4681]: I0404 03:35:31.219524 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:35:31 crc kubenswrapper[4681]: E0404 03:35:31.220418 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:35:46 crc kubenswrapper[4681]: I0404 03:35:46.201947 4681 scope.go:117] "RemoveContainer" 
containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:35:46 crc kubenswrapper[4681]: E0404 03:35:46.202880 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.143398 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587896-vt9tm"] Apr 04 03:36:00 crc kubenswrapper[4681]: E0404 03:36:00.144503 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="extract-content" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.144520 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="extract-content" Apr 04 03:36:00 crc kubenswrapper[4681]: E0404 03:36:00.144535 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="registry-server" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.144543 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="registry-server" Apr 04 03:36:00 crc kubenswrapper[4681]: E0404 03:36:00.144581 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="extract-utilities" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.144590 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="extract-utilities" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 
03:36:00.144884 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dbdb54c-a681-4478-8841-152b0738cc37" containerName="registry-server" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.145977 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.148657 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.149478 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.149544 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.167335 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587896-vt9tm"] Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.201173 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:36:00 crc kubenswrapper[4681]: E0404 03:36:00.201678 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.215011 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xv7r\" (UniqueName: 
\"kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r\") pod \"auto-csr-approver-29587896-vt9tm\" (UID: \"2d61db7d-a894-4d42-8713-f0df549a25d9\") " pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.317817 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xv7r\" (UniqueName: \"kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r\") pod \"auto-csr-approver-29587896-vt9tm\" (UID: \"2d61db7d-a894-4d42-8713-f0df549a25d9\") " pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.341685 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xv7r\" (UniqueName: \"kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r\") pod \"auto-csr-approver-29587896-vt9tm\" (UID: \"2d61db7d-a894-4d42-8713-f0df549a25d9\") " pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.467748 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:00 crc kubenswrapper[4681]: I0404 03:36:00.960363 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587896-vt9tm"] Apr 04 03:36:01 crc kubenswrapper[4681]: I0404 03:36:01.566469 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" event={"ID":"2d61db7d-a894-4d42-8713-f0df549a25d9","Type":"ContainerStarted","Data":"1c600c01e450237f3099a0793607ed799e46d8f990339dcbb2a7e1864fe142c9"} Apr 04 03:36:02 crc kubenswrapper[4681]: I0404 03:36:02.581080 4681 generic.go:334] "Generic (PLEG): container finished" podID="2d61db7d-a894-4d42-8713-f0df549a25d9" containerID="c35e34c79cd031a4f4015e6ef9f10aeaaa49dacfb8a922f1777c15556eec69b1" exitCode=0 Apr 04 03:36:02 crc kubenswrapper[4681]: I0404 03:36:02.581159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" event={"ID":"2d61db7d-a894-4d42-8713-f0df549a25d9","Type":"ContainerDied","Data":"c35e34c79cd031a4f4015e6ef9f10aeaaa49dacfb8a922f1777c15556eec69b1"} Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.010898 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.201886 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xv7r\" (UniqueName: \"kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r\") pod \"2d61db7d-a894-4d42-8713-f0df549a25d9\" (UID: \"2d61db7d-a894-4d42-8713-f0df549a25d9\") " Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.213020 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r" (OuterVolumeSpecName: "kube-api-access-4xv7r") pod "2d61db7d-a894-4d42-8713-f0df549a25d9" (UID: "2d61db7d-a894-4d42-8713-f0df549a25d9"). InnerVolumeSpecName "kube-api-access-4xv7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.304597 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xv7r\" (UniqueName: \"kubernetes.io/projected/2d61db7d-a894-4d42-8713-f0df549a25d9-kube-api-access-4xv7r\") on node \"crc\" DevicePath \"\"" Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.613656 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" event={"ID":"2d61db7d-a894-4d42-8713-f0df549a25d9","Type":"ContainerDied","Data":"1c600c01e450237f3099a0793607ed799e46d8f990339dcbb2a7e1864fe142c9"} Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.613905 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c600c01e450237f3099a0793607ed799e46d8f990339dcbb2a7e1864fe142c9" Apr 04 03:36:04 crc kubenswrapper[4681]: I0404 03:36:04.613749 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587896-vt9tm" Apr 04 03:36:05 crc kubenswrapper[4681]: I0404 03:36:05.084179 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587890-g4hk7"] Apr 04 03:36:05 crc kubenswrapper[4681]: I0404 03:36:05.093604 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587890-g4hk7"] Apr 04 03:36:05 crc kubenswrapper[4681]: I0404 03:36:05.211170 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc" path="/var/lib/kubelet/pods/cbdaae4f-5738-4ec1-8eda-4bce93f2bbbc/volumes" Apr 04 03:36:13 crc kubenswrapper[4681]: I0404 03:36:13.201850 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:36:13 crc kubenswrapper[4681]: E0404 03:36:13.202469 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:36:24 crc kubenswrapper[4681]: I0404 03:36:24.201589 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:36:24 crc kubenswrapper[4681]: E0404 03:36:24.202320 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:36:34 crc kubenswrapper[4681]: I0404 03:36:34.837026 4681 scope.go:117] "RemoveContainer" containerID="ebfbdf1cab7f4178fb9d343b366f976f791bdd4c2a7b4fca962ca3748a6dd04d" Apr 04 03:36:35 crc kubenswrapper[4681]: I0404 03:36:35.201491 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:36:35 crc kubenswrapper[4681]: E0404 03:36:35.202135 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:36:46 crc kubenswrapper[4681]: I0404 03:36:46.201599 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:36:46 crc kubenswrapper[4681]: E0404 03:36:46.202379 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:01 crc kubenswrapper[4681]: I0404 03:37:01.210040 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:37:01 crc kubenswrapper[4681]: E0404 03:37:01.210930 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.071525 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:03 crc kubenswrapper[4681]: E0404 03:37:03.072338 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d61db7d-a894-4d42-8713-f0df549a25d9" containerName="oc" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.072354 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d61db7d-a894-4d42-8713-f0df549a25d9" containerName="oc" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.072635 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d61db7d-a894-4d42-8713-f0df549a25d9" containerName="oc" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.074445 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.089776 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.197318 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzcp\" (UniqueName: \"kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.197375 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.197410 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.298952 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmzcp\" (UniqueName: \"kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.299002 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.299042 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.300248 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.300312 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.320153 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmzcp\" (UniqueName: \"kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp\") pod \"certified-operators-p2c6x\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:03 crc kubenswrapper[4681]: I0404 03:37:03.424996 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:04 crc kubenswrapper[4681]: I0404 03:37:04.001043 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:04 crc kubenswrapper[4681]: W0404 03:37:04.009865 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec6e1e26_9279_4906_8acb_74341ed895ab.slice/crio-fc54d82d265a342b87ad3271157ca51e04d2a0e42cc02f4027f419f696ca9f1d WatchSource:0}: Error finding container fc54d82d265a342b87ad3271157ca51e04d2a0e42cc02f4027f419f696ca9f1d: Status 404 returned error can't find the container with id fc54d82d265a342b87ad3271157ca51e04d2a0e42cc02f4027f419f696ca9f1d Apr 04 03:37:04 crc kubenswrapper[4681]: I0404 03:37:04.253036 4681 generic.go:334] "Generic (PLEG): container finished" podID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerID="c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778" exitCode=0 Apr 04 03:37:04 crc kubenswrapper[4681]: I0404 03:37:04.253160 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerDied","Data":"c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778"} Apr 04 03:37:04 crc kubenswrapper[4681]: I0404 03:37:04.253419 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerStarted","Data":"fc54d82d265a342b87ad3271157ca51e04d2a0e42cc02f4027f419f696ca9f1d"} Apr 04 03:37:04 crc kubenswrapper[4681]: I0404 03:37:04.255625 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:37:06 crc kubenswrapper[4681]: I0404 03:37:06.275315 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerStarted","Data":"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b"} Apr 04 03:37:08 crc kubenswrapper[4681]: I0404 03:37:08.299799 4681 generic.go:334] "Generic (PLEG): container finished" podID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerID="dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b" exitCode=0 Apr 04 03:37:08 crc kubenswrapper[4681]: I0404 03:37:08.299875 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerDied","Data":"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b"} Apr 04 03:37:09 crc kubenswrapper[4681]: I0404 03:37:09.313565 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerStarted","Data":"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548"} Apr 04 03:37:09 crc kubenswrapper[4681]: I0404 03:37:09.339778 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p2c6x" podStartSLOduration=1.8864069190000001 podStartE2EDuration="6.339754412s" podCreationTimestamp="2026-04-04 03:37:03 +0000 UTC" firstStartedPulling="2026-04-04 03:37:04.255384305 +0000 UTC m=+6103.921159425" lastFinishedPulling="2026-04-04 03:37:08.708731788 +0000 UTC m=+6108.374506918" observedRunningTime="2026-04-04 03:37:09.33565369 +0000 UTC m=+6109.001428810" watchObservedRunningTime="2026-04-04 03:37:09.339754412 +0000 UTC m=+6109.005529532" Apr 04 03:37:12 crc kubenswrapper[4681]: I0404 03:37:12.201345 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:37:12 crc kubenswrapper[4681]: E0404 03:37:12.201934 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:13 crc kubenswrapper[4681]: I0404 03:37:13.425869 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:13 crc kubenswrapper[4681]: I0404 03:37:13.426201 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:13 crc kubenswrapper[4681]: I0404 03:37:13.470655 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:14 crc kubenswrapper[4681]: I0404 03:37:14.433168 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:14 crc kubenswrapper[4681]: I0404 03:37:14.486644 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:16 crc kubenswrapper[4681]: I0404 03:37:16.383356 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p2c6x" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="registry-server" containerID="cri-o://9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548" gracePeriod=2 Apr 04 03:37:16 crc kubenswrapper[4681]: I0404 03:37:16.922153 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.038987 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmzcp\" (UniqueName: \"kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp\") pod \"ec6e1e26-9279-4906-8acb-74341ed895ab\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.039162 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content\") pod \"ec6e1e26-9279-4906-8acb-74341ed895ab\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.039280 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities\") pod \"ec6e1e26-9279-4906-8acb-74341ed895ab\" (UID: \"ec6e1e26-9279-4906-8acb-74341ed895ab\") " Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.040079 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities" (OuterVolumeSpecName: "utilities") pod "ec6e1e26-9279-4906-8acb-74341ed895ab" (UID: "ec6e1e26-9279-4906-8acb-74341ed895ab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.045944 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp" (OuterVolumeSpecName: "kube-api-access-fmzcp") pod "ec6e1e26-9279-4906-8acb-74341ed895ab" (UID: "ec6e1e26-9279-4906-8acb-74341ed895ab"). InnerVolumeSpecName "kube-api-access-fmzcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.104372 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec6e1e26-9279-4906-8acb-74341ed895ab" (UID: "ec6e1e26-9279-4906-8acb-74341ed895ab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.142252 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmzcp\" (UniqueName: \"kubernetes.io/projected/ec6e1e26-9279-4906-8acb-74341ed895ab-kube-api-access-fmzcp\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.142310 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.142326 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec6e1e26-9279-4906-8acb-74341ed895ab-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.405204 4681 generic.go:334] "Generic (PLEG): container finished" podID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerID="9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548" exitCode=0 Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.405294 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerDied","Data":"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548"} Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.405642 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-p2c6x" event={"ID":"ec6e1e26-9279-4906-8acb-74341ed895ab","Type":"ContainerDied","Data":"fc54d82d265a342b87ad3271157ca51e04d2a0e42cc02f4027f419f696ca9f1d"} Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.405672 4681 scope.go:117] "RemoveContainer" containerID="9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.405352 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p2c6x" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.430732 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.439795 4681 scope.go:117] "RemoveContainer" containerID="dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.440034 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p2c6x"] Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.465423 4681 scope.go:117] "RemoveContainer" containerID="c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.522798 4681 scope.go:117] "RemoveContainer" containerID="9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548" Apr 04 03:37:17 crc kubenswrapper[4681]: E0404 03:37:17.523340 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548\": container with ID starting with 9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548 not found: ID does not exist" containerID="9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 
03:37:17.523407 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548"} err="failed to get container status \"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548\": rpc error: code = NotFound desc = could not find container \"9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548\": container with ID starting with 9fc2ddca964c66d1e37d423616365de5d8253d0fa556e9b5e1f062ad02f7b548 not found: ID does not exist" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.523447 4681 scope.go:117] "RemoveContainer" containerID="dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b" Apr 04 03:37:17 crc kubenswrapper[4681]: E0404 03:37:17.524001 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b\": container with ID starting with dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b not found: ID does not exist" containerID="dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.524043 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b"} err="failed to get container status \"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b\": rpc error: code = NotFound desc = could not find container \"dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b\": container with ID starting with dc2b03f925f95051da9fb83620824d33b73c3b6e9cdcd3bbeb94b07ab95ff58b not found: ID does not exist" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.524077 4681 scope.go:117] "RemoveContainer" containerID="c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778" Apr 04 03:37:17 crc 
kubenswrapper[4681]: E0404 03:37:17.524481 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778\": container with ID starting with c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778 not found: ID does not exist" containerID="c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778" Apr 04 03:37:17 crc kubenswrapper[4681]: I0404 03:37:17.524543 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778"} err="failed to get container status \"c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778\": rpc error: code = NotFound desc = could not find container \"c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778\": container with ID starting with c06948faf5d3f1b89f81522f6bae52b34f8f6b8dae54b95ee349db3450cc6778 not found: ID does not exist" Apr 04 03:37:19 crc kubenswrapper[4681]: I0404 03:37:19.218760 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" path="/var/lib/kubelet/pods/ec6e1e26-9279-4906-8acb-74341ed895ab/volumes" Apr 04 03:37:27 crc kubenswrapper[4681]: I0404 03:37:27.201192 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:37:27 crc kubenswrapper[4681]: E0404 03:37:27.202494 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:41 crc 
kubenswrapper[4681]: I0404 03:37:41.211554 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:37:41 crc kubenswrapper[4681]: E0404 03:37:41.212456 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.687450 4681 generic.go:334] "Generic (PLEG): container finished" podID="9d245209-8139-42b0-aae0-5cafddfc00dd" containerID="094f9ad09696ebe66b7b1ffd4d7e985a677f284c3c10e16a5e8e4f68a7a81ba3" exitCode=0 Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.687518 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d245209-8139-42b0-aae0-5cafddfc00dd","Type":"ContainerDied","Data":"094f9ad09696ebe66b7b1ffd4d7e985a677f284c3c10e16a5e8e4f68a7a81ba3"} Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.833378 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:37:41 crc kubenswrapper[4681]: E0404 03:37:41.834805 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="extract-content" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.834835 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="extract-content" Apr 04 03:37:41 crc kubenswrapper[4681]: E0404 03:37:41.834874 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="registry-server" Apr 04 03:37:41 
crc kubenswrapper[4681]: I0404 03:37:41.834886 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="registry-server" Apr 04 03:37:41 crc kubenswrapper[4681]: E0404 03:37:41.834908 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="extract-utilities" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.834922 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="extract-utilities" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.835256 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6e1e26-9279-4906-8acb-74341ed895ab" containerName="registry-server" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.838208 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.852672 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.934161 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwndl\" (UniqueName: \"kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:41 crc kubenswrapper[4681]: I0404 03:37:41.934979 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:41 crc 
kubenswrapper[4681]: I0404 03:37:41.935184 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.036937 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.037006 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.037079 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwndl\" (UniqueName: \"kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.037422 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.037909 
4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.055713 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwndl\" (UniqueName: \"kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl\") pod \"redhat-operators-nkj27\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.156526 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.634606 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:37:42 crc kubenswrapper[4681]: I0404 03:37:42.715897 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerStarted","Data":"019d708c8948b78ffbb79a0fe4f129ecf9b9d1048c23ba877dda7d36851fa415"} Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.027592 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160185 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160338 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160433 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160488 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160573 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.160748 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.161114 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.161184 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txt6t\" (UniqueName: \"kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t\") pod \"9d245209-8139-42b0-aae0-5cafddfc00dd\" (UID: \"9d245209-8139-42b0-aae0-5cafddfc00dd\") " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.161290 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data" (OuterVolumeSpecName: "config-data") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.162053 4681 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-config-data\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.162724 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.169880 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.175769 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t" (OuterVolumeSpecName: "kube-api-access-txt6t") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "kube-api-access-txt6t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.188446 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "test-operator-logs") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.196081 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.207020 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.209778 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.245430 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9d245209-8139-42b0-aae0-5cafddfc00dd" (UID: "9d245209-8139-42b0-aae0-5cafddfc00dd"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264468 4681 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ca-certs\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264514 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264530 4681 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9d245209-8139-42b0-aae0-5cafddfc00dd-openstack-config\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264569 4681 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264582 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txt6t\" (UniqueName: \"kubernetes.io/projected/9d245209-8139-42b0-aae0-5cafddfc00dd-kube-api-access-txt6t\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264592 4681 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/9d245209-8139-42b0-aae0-5cafddfc00dd-ssh-key\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264602 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.264616 4681 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/9d245209-8139-42b0-aae0-5cafddfc00dd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.288131 4681 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.368145 4681 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.739856 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"9d245209-8139-42b0-aae0-5cafddfc00dd","Type":"ContainerDied","Data":"ac4f245b101187bf37c6a24ea79fa37ab14375bdb90e08c394db084aca1e328f"} Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.740169 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac4f245b101187bf37c6a24ea79fa37ab14375bdb90e08c394db084aca1e328f" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.740357 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.744649 4681 generic.go:334] "Generic (PLEG): container finished" podID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerID="d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0" exitCode=0 Apr 04 03:37:43 crc kubenswrapper[4681]: I0404 03:37:43.744712 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerDied","Data":"d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0"} Apr 04 03:37:44 crc kubenswrapper[4681]: I0404 03:37:44.758728 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerStarted","Data":"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70"} Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.430694 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Apr 04 03:37:49 crc kubenswrapper[4681]: E0404 03:37:49.432375 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d245209-8139-42b0-aae0-5cafddfc00dd" containerName="tempest-tests-tempest-tests-runner" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.432402 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d245209-8139-42b0-aae0-5cafddfc00dd" containerName="tempest-tests-tempest-tests-runner" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.432644 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d245209-8139-42b0-aae0-5cafddfc00dd" containerName="tempest-tests-tempest-tests-runner" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.433504 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.438721 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-5r25l" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.442555 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.597854 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.597929 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzjrr\" (UniqueName: \"kubernetes.io/projected/4e1e5191-e069-4447-ad1d-00e07ba61407-kube-api-access-mzjrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.699897 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.699967 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzjrr\" (UniqueName: 
\"kubernetes.io/projected/4e1e5191-e069-4447-ad1d-00e07ba61407-kube-api-access-mzjrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.700685 4681 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.741959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzjrr\" (UniqueName: \"kubernetes.io/projected/4e1e5191-e069-4447-ad1d-00e07ba61407-kube-api-access-mzjrr\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.756112 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"4e1e5191-e069-4447-ad1d-00e07ba61407\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.771134 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.828501 4681 generic.go:334] "Generic (PLEG): container finished" podID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerID="35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70" exitCode=0 Apr 04 03:37:49 crc kubenswrapper[4681]: I0404 03:37:49.828939 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerDied","Data":"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70"} Apr 04 03:37:50 crc kubenswrapper[4681]: I0404 03:37:50.269414 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Apr 04 03:37:50 crc kubenswrapper[4681]: I0404 03:37:50.840209 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4e1e5191-e069-4447-ad1d-00e07ba61407","Type":"ContainerStarted","Data":"b2e422ff67b690f06e2284e89cec34111773b37e0479ccee72246dbc0714b420"} Apr 04 03:37:51 crc kubenswrapper[4681]: I0404 03:37:51.916562 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerStarted","Data":"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484"} Apr 04 03:37:51 crc kubenswrapper[4681]: I0404 03:37:51.952615 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkj27" podStartSLOduration=3.4665328029999998 podStartE2EDuration="10.952587337s" podCreationTimestamp="2026-04-04 03:37:41 +0000 UTC" firstStartedPulling="2026-04-04 03:37:43.74830823 +0000 UTC m=+6143.414083350" lastFinishedPulling="2026-04-04 03:37:51.234362764 +0000 UTC m=+6150.900137884" 
observedRunningTime="2026-04-04 03:37:51.948612668 +0000 UTC m=+6151.614387798" watchObservedRunningTime="2026-04-04 03:37:51.952587337 +0000 UTC m=+6151.618362457" Apr 04 03:37:52 crc kubenswrapper[4681]: I0404 03:37:52.157314 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:52 crc kubenswrapper[4681]: I0404 03:37:52.157367 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:37:52 crc kubenswrapper[4681]: I0404 03:37:52.202485 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:37:52 crc kubenswrapper[4681]: E0404 03:37:52.202777 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:37:52 crc kubenswrapper[4681]: I0404 03:37:52.928024 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"4e1e5191-e069-4447-ad1d-00e07ba61407","Type":"ContainerStarted","Data":"426771ee5a50e7d71deb58eeefd26bf2b3538ffc4041b14cb27547e4de794b44"} Apr 04 03:37:52 crc kubenswrapper[4681]: I0404 03:37:52.945672 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.07654978 podStartE2EDuration="3.945656067s" podCreationTimestamp="2026-04-04 03:37:49 +0000 UTC" firstStartedPulling="2026-04-04 03:37:50.261527026 +0000 UTC m=+6149.927302146" lastFinishedPulling="2026-04-04 
03:37:52.130633313 +0000 UTC m=+6151.796408433" observedRunningTime="2026-04-04 03:37:52.940729393 +0000 UTC m=+6152.606504523" watchObservedRunningTime="2026-04-04 03:37:52.945656067 +0000 UTC m=+6152.611431187" Apr 04 03:37:53 crc kubenswrapper[4681]: I0404 03:37:53.202610 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkj27" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="registry-server" probeResult="failure" output=< Apr 04 03:37:53 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:37:53 crc kubenswrapper[4681]: > Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.157485 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587898-kcmp6"] Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.159935 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.162534 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.162703 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.168344 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.169302 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587898-kcmp6"] Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.236304 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swb8k\" (UniqueName: \"kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k\") pod 
\"auto-csr-approver-29587898-kcmp6\" (UID: \"20aaf7bf-469a-462c-a60a-d44305ae3848\") " pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.338424 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swb8k\" (UniqueName: \"kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k\") pod \"auto-csr-approver-29587898-kcmp6\" (UID: \"20aaf7bf-469a-462c-a60a-d44305ae3848\") " pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.368242 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swb8k\" (UniqueName: \"kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k\") pod \"auto-csr-approver-29587898-kcmp6\" (UID: \"20aaf7bf-469a-462c-a60a-d44305ae3848\") " pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.479861 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:00 crc kubenswrapper[4681]: I0404 03:38:00.948192 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587898-kcmp6"] Apr 04 03:38:01 crc kubenswrapper[4681]: I0404 03:38:01.003457 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" event={"ID":"20aaf7bf-469a-462c-a60a-d44305ae3848","Type":"ContainerStarted","Data":"2a279e9d661a925d2e6983b77aa6efe59200ea30ee12d6a85d3a68012e5600b9"} Apr 04 03:38:02 crc kubenswrapper[4681]: I0404 03:38:02.216408 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:38:02 crc kubenswrapper[4681]: I0404 03:38:02.330513 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:38:02 crc kubenswrapper[4681]: I0404 03:38:02.458759 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:38:03 crc kubenswrapper[4681]: I0404 03:38:03.022752 4681 generic.go:334] "Generic (PLEG): container finished" podID="20aaf7bf-469a-462c-a60a-d44305ae3848" containerID="ab3bba527904c622cdfd010e79df10068efaaf0560449381446e1541b89f8bae" exitCode=0 Apr 04 03:38:03 crc kubenswrapper[4681]: I0404 03:38:03.022795 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" event={"ID":"20aaf7bf-469a-462c-a60a-d44305ae3848","Type":"ContainerDied","Data":"ab3bba527904c622cdfd010e79df10068efaaf0560449381446e1541b89f8bae"} Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.032087 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkj27" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="registry-server" 
containerID="cri-o://381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484" gracePeriod=2 Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.487377 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.641086 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swb8k\" (UniqueName: \"kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k\") pod \"20aaf7bf-469a-462c-a60a-d44305ae3848\" (UID: \"20aaf7bf-469a-462c-a60a-d44305ae3848\") " Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.647356 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k" (OuterVolumeSpecName: "kube-api-access-swb8k") pod "20aaf7bf-469a-462c-a60a-d44305ae3848" (UID: "20aaf7bf-469a-462c-a60a-d44305ae3848"). InnerVolumeSpecName "kube-api-access-swb8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.657499 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.742615 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content\") pod \"82e00235-c539-407e-8e96-0543c3d3d6dd\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.742697 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwndl\" (UniqueName: \"kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl\") pod \"82e00235-c539-407e-8e96-0543c3d3d6dd\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.742774 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities\") pod \"82e00235-c539-407e-8e96-0543c3d3d6dd\" (UID: \"82e00235-c539-407e-8e96-0543c3d3d6dd\") " Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.743417 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swb8k\" (UniqueName: \"kubernetes.io/projected/20aaf7bf-469a-462c-a60a-d44305ae3848-kube-api-access-swb8k\") on node \"crc\" DevicePath \"\"" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.743838 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities" (OuterVolumeSpecName: "utilities") pod "82e00235-c539-407e-8e96-0543c3d3d6dd" (UID: "82e00235-c539-407e-8e96-0543c3d3d6dd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.746162 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl" (OuterVolumeSpecName: "kube-api-access-rwndl") pod "82e00235-c539-407e-8e96-0543c3d3d6dd" (UID: "82e00235-c539-407e-8e96-0543c3d3d6dd"). InnerVolumeSpecName "kube-api-access-rwndl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.845398 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwndl\" (UniqueName: \"kubernetes.io/projected/82e00235-c539-407e-8e96-0543c3d3d6dd-kube-api-access-rwndl\") on node \"crc\" DevicePath \"\"" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.845441 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.885972 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "82e00235-c539-407e-8e96-0543c3d3d6dd" (UID: "82e00235-c539-407e-8e96-0543c3d3d6dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:38:04 crc kubenswrapper[4681]: I0404 03:38:04.947483 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/82e00235-c539-407e-8e96-0543c3d3d6dd-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.045079 4681 generic.go:334] "Generic (PLEG): container finished" podID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerID="381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484" exitCode=0 Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.045206 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkj27" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.046285 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerDied","Data":"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484"} Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.046454 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkj27" event={"ID":"82e00235-c539-407e-8e96-0543c3d3d6dd","Type":"ContainerDied","Data":"019d708c8948b78ffbb79a0fe4f129ecf9b9d1048c23ba877dda7d36851fa415"} Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.046534 4681 scope.go:117] "RemoveContainer" containerID="381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.049905 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" event={"ID":"20aaf7bf-469a-462c-a60a-d44305ae3848","Type":"ContainerDied","Data":"2a279e9d661a925d2e6983b77aa6efe59200ea30ee12d6a85d3a68012e5600b9"} Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.049934 4681 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a279e9d661a925d2e6983b77aa6efe59200ea30ee12d6a85d3a68012e5600b9" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.049969 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587898-kcmp6" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.078835 4681 scope.go:117] "RemoveContainer" containerID="35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.089452 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.112404 4681 scope.go:117] "RemoveContainer" containerID="d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.112544 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkj27"] Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.140880 4681 scope.go:117] "RemoveContainer" containerID="381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484" Apr 04 03:38:05 crc kubenswrapper[4681]: E0404 03:38:05.141460 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484\": container with ID starting with 381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484 not found: ID does not exist" containerID="381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.141525 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484"} err="failed to get container status 
\"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484\": rpc error: code = NotFound desc = could not find container \"381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484\": container with ID starting with 381cf3aed946b1d9b6ec41d337c4cfdd0c0e7200e1bbfec6b768d8f7a7642484 not found: ID does not exist" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.141560 4681 scope.go:117] "RemoveContainer" containerID="35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70" Apr 04 03:38:05 crc kubenswrapper[4681]: E0404 03:38:05.141867 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70\": container with ID starting with 35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70 not found: ID does not exist" containerID="35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.141985 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70"} err="failed to get container status \"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70\": rpc error: code = NotFound desc = could not find container \"35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70\": container with ID starting with 35149bec8484e5817f0cba79074dac5e4080bca7e673114959a35d3ef54a1e70 not found: ID does not exist" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.142064 4681 scope.go:117] "RemoveContainer" containerID="d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0" Apr 04 03:38:05 crc kubenswrapper[4681]: E0404 03:38:05.142365 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0\": container with ID starting with d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0 not found: ID does not exist" containerID="d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.142395 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0"} err="failed to get container status \"d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0\": rpc error: code = NotFound desc = could not find container \"d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0\": container with ID starting with d97a177ae5c6834b1d61c22a87d43f8d16cf5c651848126e578f8a2df96321d0 not found: ID does not exist" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.213892 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" path="/var/lib/kubelet/pods/82e00235-c539-407e-8e96-0543c3d3d6dd/volumes" Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.564631 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587892-mwdbw"] Apr 04 03:38:05 crc kubenswrapper[4681]: I0404 03:38:05.574147 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587892-mwdbw"] Apr 04 03:38:06 crc kubenswrapper[4681]: I0404 03:38:06.201078 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:38:06 crc kubenswrapper[4681]: E0404 03:38:06.201338 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:38:07 crc kubenswrapper[4681]: I0404 03:38:07.215938 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15eb65ea-57bf-4144-b436-4d5b18e25a0e" path="/var/lib/kubelet/pods/15eb65ea-57bf-4144-b436-4d5b18e25a0e/volumes" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.005447 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ddjjc/must-gather-qsq8k"] Apr 04 03:38:19 crc kubenswrapper[4681]: E0404 03:38:19.006575 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="registry-server" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006594 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="registry-server" Apr 04 03:38:19 crc kubenswrapper[4681]: E0404 03:38:19.006637 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20aaf7bf-469a-462c-a60a-d44305ae3848" containerName="oc" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006651 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="20aaf7bf-469a-462c-a60a-d44305ae3848" containerName="oc" Apr 04 03:38:19 crc kubenswrapper[4681]: E0404 03:38:19.006671 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="extract-utilities" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006680 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="extract-utilities" Apr 04 03:38:19 crc kubenswrapper[4681]: E0404 03:38:19.006698 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" 
containerName="extract-content" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006705 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="extract-content" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006958 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="20aaf7bf-469a-462c-a60a-d44305ae3848" containerName="oc" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.006995 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e00235-c539-407e-8e96-0543c3d3d6dd" containerName="registry-server" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.008366 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.010792 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ddjjc"/"openshift-service-ca.crt" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.011020 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ddjjc"/"kube-root-ca.crt" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.016416 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ddjjc/must-gather-qsq8k"] Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.026600 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ddjjc"/"default-dockercfg-7mxl4" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.140670 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 
03:38:19.140815 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4dtl\" (UniqueName: \"kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.242608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.242709 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4dtl\" (UniqueName: \"kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.243250 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.262043 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4dtl\" (UniqueName: \"kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl\") pod \"must-gather-qsq8k\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 
03:38:19.338902 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:38:19 crc kubenswrapper[4681]: I0404 03:38:19.864760 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ddjjc/must-gather-qsq8k"] Apr 04 03:38:20 crc kubenswrapper[4681]: I0404 03:38:20.196692 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" event={"ID":"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8","Type":"ContainerStarted","Data":"7b73fe1f5a8453d32cf3569b0832eef61688cfc037b1b72d6f2912d7927b7450"} Apr 04 03:38:21 crc kubenswrapper[4681]: I0404 03:38:21.209991 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:38:21 crc kubenswrapper[4681]: E0404 03:38:21.212500 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:38:29 crc kubenswrapper[4681]: I0404 03:38:29.295349 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" event={"ID":"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8","Type":"ContainerStarted","Data":"5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b"} Apr 04 03:38:29 crc kubenswrapper[4681]: I0404 03:38:29.296208 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" event={"ID":"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8","Type":"ContainerStarted","Data":"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b"} Apr 04 03:38:29 crc kubenswrapper[4681]: I0404 
03:38:29.312379 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" podStartSLOduration=3.035445873 podStartE2EDuration="11.31236271s" podCreationTimestamp="2026-04-04 03:38:18 +0000 UTC" firstStartedPulling="2026-04-04 03:38:19.878923834 +0000 UTC m=+6179.544698954" lastFinishedPulling="2026-04-04 03:38:28.155840671 +0000 UTC m=+6187.821615791" observedRunningTime="2026-04-04 03:38:29.307473857 +0000 UTC m=+6188.973248977" watchObservedRunningTime="2026-04-04 03:38:29.31236271 +0000 UTC m=+6188.978137830" Apr 04 03:38:34 crc kubenswrapper[4681]: I0404 03:38:34.973904 4681 scope.go:117] "RemoveContainer" containerID="ab3e37c459ad3c1a45cc0d80f6a5febffe5d2843f2b3b526554828a459c32b44" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.264890 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-v7nrf"] Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.268569 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.323182 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfb7d\" (UniqueName: \"kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.323321 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.428698 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfb7d\" (UniqueName: \"kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.428808 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.429071 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc 
kubenswrapper[4681]: I0404 03:38:35.458804 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfb7d\" (UniqueName: \"kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d\") pod \"crc-debug-v7nrf\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:35 crc kubenswrapper[4681]: I0404 03:38:35.600431 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:38:36 crc kubenswrapper[4681]: I0404 03:38:36.201191 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:38:36 crc kubenswrapper[4681]: E0404 03:38:36.201908 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:38:36 crc kubenswrapper[4681]: I0404 03:38:36.364917 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" event={"ID":"79acf00d-5a70-4b3f-84db-45491be28514","Type":"ContainerStarted","Data":"8ff358cf186b9a3b51c7cf628e5bf75d4df8f98226847429de4422fdfbee531f"} Apr 04 03:38:47 crc kubenswrapper[4681]: I0404 03:38:47.488152 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" event={"ID":"79acf00d-5a70-4b3f-84db-45491be28514","Type":"ContainerStarted","Data":"e14d57763c15be13727d76006f7126720e23efc1b3efa07b1194029e73e5dade"} Apr 04 03:38:47 crc kubenswrapper[4681]: I0404 03:38:47.505091 4681 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" podStartSLOduration=1.453556917 podStartE2EDuration="12.505069722s" podCreationTimestamp="2026-04-04 03:38:35 +0000 UTC" firstStartedPulling="2026-04-04 03:38:35.670991659 +0000 UTC m=+6195.336766779" lastFinishedPulling="2026-04-04 03:38:46.722504464 +0000 UTC m=+6206.388279584" observedRunningTime="2026-04-04 03:38:47.504787864 +0000 UTC m=+6207.170562984" watchObservedRunningTime="2026-04-04 03:38:47.505069722 +0000 UTC m=+6207.170844852" Apr 04 03:38:51 crc kubenswrapper[4681]: I0404 03:38:51.211123 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:38:51 crc kubenswrapper[4681]: E0404 03:38:51.211907 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:39:05 crc kubenswrapper[4681]: I0404 03:39:05.203924 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:39:05 crc kubenswrapper[4681]: E0404 03:39:05.204829 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:39:20 crc kubenswrapper[4681]: I0404 03:39:20.202151 4681 scope.go:117] "RemoveContainer" 
containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:39:20 crc kubenswrapper[4681]: E0404 03:39:20.203695 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:39:32 crc kubenswrapper[4681]: I0404 03:39:32.944210 4681 generic.go:334] "Generic (PLEG): container finished" podID="79acf00d-5a70-4b3f-84db-45491be28514" containerID="e14d57763c15be13727d76006f7126720e23efc1b3efa07b1194029e73e5dade" exitCode=0 Apr 04 03:39:32 crc kubenswrapper[4681]: I0404 03:39:32.944312 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" event={"ID":"79acf00d-5a70-4b3f-84db-45491be28514","Type":"ContainerDied","Data":"e14d57763c15be13727d76006f7126720e23efc1b3efa07b1194029e73e5dade"} Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.083709 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.123884 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-v7nrf"] Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.134013 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-v7nrf"] Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.199469 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host\") pod \"79acf00d-5a70-4b3f-84db-45491be28514\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.199637 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host" (OuterVolumeSpecName: "host") pod "79acf00d-5a70-4b3f-84db-45491be28514" (UID: "79acf00d-5a70-4b3f-84db-45491be28514"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.199745 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfb7d\" (UniqueName: \"kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d\") pod \"79acf00d-5a70-4b3f-84db-45491be28514\" (UID: \"79acf00d-5a70-4b3f-84db-45491be28514\") " Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.200204 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/79acf00d-5a70-4b3f-84db-45491be28514-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.205563 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d" (OuterVolumeSpecName: "kube-api-access-tfb7d") pod "79acf00d-5a70-4b3f-84db-45491be28514" (UID: "79acf00d-5a70-4b3f-84db-45491be28514"). InnerVolumeSpecName "kube-api-access-tfb7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.302201 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfb7d\" (UniqueName: \"kubernetes.io/projected/79acf00d-5a70-4b3f-84db-45491be28514-kube-api-access-tfb7d\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.966799 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ff358cf186b9a3b51c7cf628e5bf75d4df8f98226847429de4422fdfbee531f" Apr 04 03:39:34 crc kubenswrapper[4681]: I0404 03:39:34.966883 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-v7nrf" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.200786 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.212180 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79acf00d-5a70-4b3f-84db-45491be28514" path="/var/lib/kubelet/pods/79acf00d-5a70-4b3f-84db-45491be28514/volumes" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.303161 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-gcbtc"] Apr 04 03:39:35 crc kubenswrapper[4681]: E0404 03:39:35.303857 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79acf00d-5a70-4b3f-84db-45491be28514" containerName="container-00" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.303879 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="79acf00d-5a70-4b3f-84db-45491be28514" containerName="container-00" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.306746 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="79acf00d-5a70-4b3f-84db-45491be28514" containerName="container-00" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.307664 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.427414 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jwc\" (UniqueName: \"kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.427460 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.529626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jwc\" (UniqueName: \"kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.529699 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.529941 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc 
kubenswrapper[4681]: I0404 03:39:35.549128 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jwc\" (UniqueName: \"kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc\") pod \"crc-debug-gcbtc\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.664747 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.980255 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" event={"ID":"f1b42e8c-a558-437d-a072-9a8a6c3f2a89","Type":"ContainerStarted","Data":"aa822f8370e2930225715beac9723b34a79afacb0da90f54bbc0d32b914075ab"} Apr 04 03:39:35 crc kubenswrapper[4681]: I0404 03:39:35.983679 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60"} Apr 04 03:39:36 crc kubenswrapper[4681]: I0404 03:39:36.992934 4681 generic.go:334] "Generic (PLEG): container finished" podID="f1b42e8c-a558-437d-a072-9a8a6c3f2a89" containerID="ce4e008b96470896a388abb99081dd95a45094a0502868f72d23bd59a81511c4" exitCode=0 Apr 04 03:39:36 crc kubenswrapper[4681]: I0404 03:39:36.993009 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" event={"ID":"f1b42e8c-a558-437d-a072-9a8a6c3f2a89","Type":"ContainerDied","Data":"ce4e008b96470896a388abb99081dd95a45094a0502868f72d23bd59a81511c4"} Apr 04 03:39:38 crc kubenswrapper[4681]: I0404 03:39:38.986895 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.011879 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" event={"ID":"f1b42e8c-a558-437d-a072-9a8a6c3f2a89","Type":"ContainerDied","Data":"aa822f8370e2930225715beac9723b34a79afacb0da90f54bbc0d32b914075ab"} Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.012125 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa822f8370e2930225715beac9723b34a79afacb0da90f54bbc0d32b914075ab" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.011933 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-gcbtc" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.102300 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host\") pod \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.102401 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host" (OuterVolumeSpecName: "host") pod "f1b42e8c-a558-437d-a072-9a8a6c3f2a89" (UID: "f1b42e8c-a558-437d-a072-9a8a6c3f2a89"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.102459 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jwc\" (UniqueName: \"kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc\") pod \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\" (UID: \"f1b42e8c-a558-437d-a072-9a8a6c3f2a89\") " Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.103737 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.126451 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc" (OuterVolumeSpecName: "kube-api-access-w8jwc") pod "f1b42e8c-a558-437d-a072-9a8a6c3f2a89" (UID: "f1b42e8c-a558-437d-a072-9a8a6c3f2a89"). InnerVolumeSpecName "kube-api-access-w8jwc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.205373 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jwc\" (UniqueName: \"kubernetes.io/projected/f1b42e8c-a558-437d-a072-9a8a6c3f2a89-kube-api-access-w8jwc\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.648719 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-gcbtc"] Apr 04 03:39:39 crc kubenswrapper[4681]: I0404 03:39:39.658496 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-gcbtc"] Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.833723 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-2nkv6"] Apr 04 03:39:40 crc kubenswrapper[4681]: E0404 03:39:40.841363 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1b42e8c-a558-437d-a072-9a8a6c3f2a89" containerName="container-00" Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.841387 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1b42e8c-a558-437d-a072-9a8a6c3f2a89" containerName="container-00" Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.841629 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1b42e8c-a558-437d-a072-9a8a6c3f2a89" containerName="container-00" Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.842355 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.938716 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplsq\" (UniqueName: \"kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:40 crc kubenswrapper[4681]: I0404 03:39:40.938775 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc kubenswrapper[4681]: I0404 03:39:41.041504 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pplsq\" (UniqueName: \"kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc kubenswrapper[4681]: I0404 03:39:41.041608 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc kubenswrapper[4681]: I0404 03:39:41.041729 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc 
kubenswrapper[4681]: I0404 03:39:41.061386 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pplsq\" (UniqueName: \"kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq\") pod \"crc-debug-2nkv6\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc kubenswrapper[4681]: I0404 03:39:41.161439 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:41 crc kubenswrapper[4681]: I0404 03:39:41.213166 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1b42e8c-a558-437d-a072-9a8a6c3f2a89" path="/var/lib/kubelet/pods/f1b42e8c-a558-437d-a072-9a8a6c3f2a89/volumes" Apr 04 03:39:42 crc kubenswrapper[4681]: I0404 03:39:42.040996 4681 generic.go:334] "Generic (PLEG): container finished" podID="d261cd9d-3bea-44e6-b26e-ae6a01199b6a" containerID="2da5327d785865854fc0ae9265d4753c4e35a4158a0bd81fc618ffc7ec336214" exitCode=0 Apr 04 03:39:42 crc kubenswrapper[4681]: I0404 03:39:42.041060 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" event={"ID":"d261cd9d-3bea-44e6-b26e-ae6a01199b6a","Type":"ContainerDied","Data":"2da5327d785865854fc0ae9265d4753c4e35a4158a0bd81fc618ffc7ec336214"} Apr 04 03:39:42 crc kubenswrapper[4681]: I0404 03:39:42.041327 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" event={"ID":"d261cd9d-3bea-44e6-b26e-ae6a01199b6a","Type":"ContainerStarted","Data":"77c56cc98e3bafc3fe352f6eb273b6350f676d457331d2c16cf44c925d9fc23b"} Apr 04 03:39:42 crc kubenswrapper[4681]: I0404 03:39:42.102312 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ddjjc/crc-debug-2nkv6"] Apr 04 03:39:42 crc kubenswrapper[4681]: I0404 03:39:42.112623 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-must-gather-ddjjc/crc-debug-2nkv6"] Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.171800 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.285603 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pplsq\" (UniqueName: \"kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq\") pod \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.285832 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host\") pod \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\" (UID: \"d261cd9d-3bea-44e6-b26e-ae6a01199b6a\") " Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.286124 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host" (OuterVolumeSpecName: "host") pod "d261cd9d-3bea-44e6-b26e-ae6a01199b6a" (UID: "d261cd9d-3bea-44e6-b26e-ae6a01199b6a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.286726 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.299454 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq" (OuterVolumeSpecName: "kube-api-access-pplsq") pod "d261cd9d-3bea-44e6-b26e-ae6a01199b6a" (UID: "d261cd9d-3bea-44e6-b26e-ae6a01199b6a"). 
InnerVolumeSpecName "kube-api-access-pplsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:39:43 crc kubenswrapper[4681]: I0404 03:39:43.389823 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pplsq\" (UniqueName: \"kubernetes.io/projected/d261cd9d-3bea-44e6-b26e-ae6a01199b6a-kube-api-access-pplsq\") on node \"crc\" DevicePath \"\"" Apr 04 03:39:44 crc kubenswrapper[4681]: I0404 03:39:44.060159 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ddjjc/crc-debug-2nkv6" Apr 04 03:39:44 crc kubenswrapper[4681]: I0404 03:39:44.066359 4681 scope.go:117] "RemoveContainer" containerID="2da5327d785865854fc0ae9265d4753c4e35a4158a0bd81fc618ffc7ec336214" Apr 04 03:39:45 crc kubenswrapper[4681]: I0404 03:39:45.214136 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d261cd9d-3bea-44e6-b26e-ae6a01199b6a" path="/var/lib/kubelet/pods/d261cd9d-3bea-44e6-b26e-ae6a01199b6a/volumes" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.154512 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587900-b6477"] Apr 04 03:40:00 crc kubenswrapper[4681]: E0404 03:40:00.155837 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d261cd9d-3bea-44e6-b26e-ae6a01199b6a" containerName="container-00" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.155858 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d261cd9d-3bea-44e6-b26e-ae6a01199b6a" containerName="container-00" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.156111 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="d261cd9d-3bea-44e6-b26e-ae6a01199b6a" containerName="container-00" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.156989 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.159663 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.160141 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.161097 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.181740 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587900-b6477"] Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.234196 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thtgj\" (UniqueName: \"kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj\") pod \"auto-csr-approver-29587900-b6477\" (UID: \"cf00ebf8-c237-400a-9ca4-ca71495e1e10\") " pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.340541 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thtgj\" (UniqueName: \"kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj\") pod \"auto-csr-approver-29587900-b6477\" (UID: \"cf00ebf8-c237-400a-9ca4-ca71495e1e10\") " pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.359129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thtgj\" (UniqueName: \"kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj\") pod \"auto-csr-approver-29587900-b6477\" (UID: \"cf00ebf8-c237-400a-9ca4-ca71495e1e10\") " 
pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.477641 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:00 crc kubenswrapper[4681]: I0404 03:40:00.974957 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587900-b6477"] Apr 04 03:40:01 crc kubenswrapper[4681]: I0404 03:40:01.250641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587900-b6477" event={"ID":"cf00ebf8-c237-400a-9ca4-ca71495e1e10","Type":"ContainerStarted","Data":"73c8094f1d3c9682647c57a2e5f5cad36ae84a3e68858a41b76922b98b9cb29a"} Apr 04 03:40:02 crc kubenswrapper[4681]: I0404 03:40:02.262971 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587900-b6477" event={"ID":"cf00ebf8-c237-400a-9ca4-ca71495e1e10","Type":"ContainerStarted","Data":"20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0"} Apr 04 03:40:02 crc kubenswrapper[4681]: I0404 03:40:02.279476 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587900-b6477" podStartSLOduration=1.323697554 podStartE2EDuration="2.279455236s" podCreationTimestamp="2026-04-04 03:40:00 +0000 UTC" firstStartedPulling="2026-04-04 03:40:00.987749779 +0000 UTC m=+6280.653524919" lastFinishedPulling="2026-04-04 03:40:01.943507481 +0000 UTC m=+6281.609282601" observedRunningTime="2026-04-04 03:40:02.275443077 +0000 UTC m=+6281.941218197" watchObservedRunningTime="2026-04-04 03:40:02.279455236 +0000 UTC m=+6281.945230356" Apr 04 03:40:02 crc kubenswrapper[4681]: E0404 03:40:02.727931 4681 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf00ebf8_c237_400a_9ca4_ca71495e1e10.slice/crio-20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf00ebf8_c237_400a_9ca4_ca71495e1e10.slice/crio-conmon-20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0.scope\": RecentStats: unable to find data in memory cache]" Apr 04 03:40:03 crc kubenswrapper[4681]: I0404 03:40:03.281898 4681 generic.go:334] "Generic (PLEG): container finished" podID="cf00ebf8-c237-400a-9ca4-ca71495e1e10" containerID="20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0" exitCode=0 Apr 04 03:40:03 crc kubenswrapper[4681]: I0404 03:40:03.282194 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587900-b6477" event={"ID":"cf00ebf8-c237-400a-9ca4-ca71495e1e10","Type":"ContainerDied","Data":"20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0"} Apr 04 03:40:04 crc kubenswrapper[4681]: I0404 03:40:04.783017 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:04 crc kubenswrapper[4681]: I0404 03:40:04.839182 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thtgj\" (UniqueName: \"kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj\") pod \"cf00ebf8-c237-400a-9ca4-ca71495e1e10\" (UID: \"cf00ebf8-c237-400a-9ca4-ca71495e1e10\") " Apr 04 03:40:04 crc kubenswrapper[4681]: I0404 03:40:04.857618 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj" (OuterVolumeSpecName: "kube-api-access-thtgj") pod "cf00ebf8-c237-400a-9ca4-ca71495e1e10" (UID: "cf00ebf8-c237-400a-9ca4-ca71495e1e10"). 
InnerVolumeSpecName "kube-api-access-thtgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:40:04 crc kubenswrapper[4681]: I0404 03:40:04.941368 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thtgj\" (UniqueName: \"kubernetes.io/projected/cf00ebf8-c237-400a-9ca4-ca71495e1e10-kube-api-access-thtgj\") on node \"crc\" DevicePath \"\"" Apr 04 03:40:05 crc kubenswrapper[4681]: I0404 03:40:05.306128 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587900-b6477" event={"ID":"cf00ebf8-c237-400a-9ca4-ca71495e1e10","Type":"ContainerDied","Data":"73c8094f1d3c9682647c57a2e5f5cad36ae84a3e68858a41b76922b98b9cb29a"} Apr 04 03:40:05 crc kubenswrapper[4681]: I0404 03:40:05.306477 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73c8094f1d3c9682647c57a2e5f5cad36ae84a3e68858a41b76922b98b9cb29a" Apr 04 03:40:05 crc kubenswrapper[4681]: I0404 03:40:05.306198 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587900-b6477" Apr 04 03:40:05 crc kubenswrapper[4681]: I0404 03:40:05.364969 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587894-tbc8x"] Apr 04 03:40:05 crc kubenswrapper[4681]: I0404 03:40:05.378435 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587894-tbc8x"] Apr 04 03:40:07 crc kubenswrapper[4681]: I0404 03:40:07.216702 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88694644-a5da-4e3a-a6eb-7f394a06c826" path="/var/lib/kubelet/pods/88694644-a5da-4e3a-a6eb-7f394a06c826/volumes" Apr 04 03:40:14 crc kubenswrapper[4681]: I0404 03:40:14.590258 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb6cbf97d-96269_bce5c08c-6cdc-47ae-9454-ffc500f6e34c/barbican-api/0.log" Apr 04 03:40:14 crc kubenswrapper[4681]: I0404 03:40:14.757152 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb6cbf97d-96269_bce5c08c-6cdc-47ae-9454-ffc500f6e34c/barbican-api-log/0.log" Apr 04 03:40:14 crc kubenswrapper[4681]: I0404 03:40:14.824188 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796868c666-kk4mh_e9699275-8e01-4222-9e46-b90aa70f2a3c/barbican-keystone-listener/0.log" Apr 04 03:40:14 crc kubenswrapper[4681]: I0404 03:40:14.913926 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796868c666-kk4mh_e9699275-8e01-4222-9e46-b90aa70f2a3c/barbican-keystone-listener-log/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.036440 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5699bfbbf-jpbrf_24831041-c157-474d-9e6d-55931683ed21/barbican-worker/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.128680 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-worker-5699bfbbf-jpbrf_24831041-c157-474d-9e6d-55931683ed21/barbican-worker-log/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.446210 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/ceilometer-notification-agent/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.488947 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/ceilometer-central-agent/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.554898 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64_00befa4c-4be8-4cc4-8e8e-46c0bb3b6592/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.598107 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/proxy-httpd/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.630345 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/sg-core/0.log" Apr 04 03:40:15 crc kubenswrapper[4681]: I0404 03:40:15.896132 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1f293d4-d146-49d4-a75d-8e972a25b758/cinder-api-log/0.log" Apr 04 03:40:16 crc kubenswrapper[4681]: I0404 03:40:16.195838 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b36b7670-b847-4635-8dd5-8d5ea0d7825c/probe/0.log" Apr 04 03:40:16 crc kubenswrapper[4681]: I0404 03:40:16.467482 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_df8847f1-00d6-45d1-a106-b2c8c69abb35/cinder-scheduler/0.log" Apr 04 03:40:16 crc kubenswrapper[4681]: I0404 03:40:16.515776 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-scheduler-0_df8847f1-00d6-45d1-a106-b2c8c69abb35/probe/0.log" Apr 04 03:40:16 crc kubenswrapper[4681]: I0404 03:40:16.967201 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_1ba04b4d-7697-4313-8759-e95a65957daa/probe/0.log" Apr 04 03:40:17 crc kubenswrapper[4681]: I0404 03:40:17.589680 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b36b7670-b847-4635-8dd5-8d5ea0d7825c/cinder-backup/0.log" Apr 04 03:40:17 crc kubenswrapper[4681]: I0404 03:40:17.844395 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1f293d4-d146-49d4-a75d-8e972a25b758/cinder-api/0.log" Apr 04 03:40:17 crc kubenswrapper[4681]: I0404 03:40:17.860896 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c220dfdf-0f59-4093-b5dd-b2eba1a80fee/probe/0.log" Apr 04 03:40:17 crc kubenswrapper[4681]: I0404 03:40:17.956536 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_1ba04b4d-7697-4313-8759-e95a65957daa/cinder-volume/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.366376 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c220dfdf-0f59-4093-b5dd-b2eba1a80fee/cinder-volume/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.399551 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g74jm_e1248b6b-52bc-4b4a-b901-afa695bb799f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.576803 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/init/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.593763 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6b966_6d18b62e-86ae-4c2b-864c-315581ca4f1a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.800182 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/init/0.log" Apr 04 03:40:18 crc kubenswrapper[4681]: I0404 03:40:18.958665 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/dnsmasq-dns/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.248092 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-99rqr_b3b7061a-37ce-4302-a3a3-f06aff60e3a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.372754 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9af43da5-4945-49e2-ad66-afe1eefd4f4c/glance-log/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.394465 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9af43da5-4945-49e2-ad66-afe1eefd4f4c/glance-httpd/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.540333 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1f0d9a1d-5773-426e-adfa-6a0aae0ec79a/glance-httpd/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.596516 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1f0d9a1d-5773-426e-adfa-6a0aae0ec79a/glance-log/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.720668 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq_76d7d624-1948-4ecc-ae72-3e40c03ec267/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:19 crc kubenswrapper[4681]: I0404 03:40:19.973617 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29587861-lh4pt_dfc4e081-9222-4cea-833f-d9137246664a/keystone-cron/0.log" Apr 04 03:40:20 crc kubenswrapper[4681]: I0404 03:40:20.161121 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6e40c22d-4a3b-4321-ac7d-f623845423fc/kube-state-metrics/0.log" Apr 04 03:40:20 crc kubenswrapper[4681]: I0404 03:40:20.238302 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cqsxv_c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:20 crc kubenswrapper[4681]: I0404 03:40:20.848902 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cdf6cfbdd-xgxdx_85cc490e-cee8-405f-b498-41415aae210e/keystone-api/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.118875 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554fd9954f-c5kv8_99648c0a-d8f3-41f8-a03d-7a21a4a84156/neutron-httpd/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.218252 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554fd9954f-c5kv8_99648c0a-d8f3-41f8-a03d-7a21a4a84156/neutron-api/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.432414 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/setup-container/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.476389 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl_5c4ac822-458d-449c-b7e9-16ce85e56b63/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.775413 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/setup-container/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.882438 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/rabbitmq/0.log" Apr 04 03:40:21 crc kubenswrapper[4681]: I0404 03:40:21.982519 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2_1784fc32-2907-4203-a7cd-0053cfe1d338/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:22 crc kubenswrapper[4681]: I0404 03:40:22.459891 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f63a7210-378a-4a4e-a458-33f19fbc360b/nova-cell0-conductor-conductor/0.log" Apr 04 03:40:22 crc kubenswrapper[4681]: I0404 03:40:22.782952 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0e7ba727-658d-49f6-9e24-68da37adca06/nova-cell1-conductor-conductor/0.log" Apr 04 03:40:23 crc kubenswrapper[4681]: I0404 03:40:23.155870 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f5986086-65b9-41b2-bb40-8ad2c6b42d11/nova-cell1-novncproxy-novncproxy/0.log" Apr 04 03:40:23 crc kubenswrapper[4681]: I0404 03:40:23.780720 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_858e598e-35ac-4ca2-a5d5-52e31278378f/nova-api-log/0.log" Apr 04 03:40:23 crc kubenswrapper[4681]: I0404 03:40:23.991870 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-metadata-0_c85c4e78-d474-4016-b2b1-e05582da0f60/nova-metadata-log/0.log" Apr 04 03:40:24 crc kubenswrapper[4681]: I0404 03:40:24.731651 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_858e598e-35ac-4ca2-a5d5-52e31278378f/nova-api-api/0.log" Apr 04 03:40:24 crc kubenswrapper[4681]: I0404 03:40:24.809027 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_16143562-a1da-4713-a062-e3b850e170f0/nova-scheduler-scheduler/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.023946 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c85c4e78-d474-4016-b2b1-e05582da0f60/nova-metadata-metadata/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.071708 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lf7rx_1c6f1a3c-3cad-4d39-8155-69c4a2ce1378/nova-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.124955 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/mysql-bootstrap/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.385469 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/mysql-bootstrap/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.407632 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/galera/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.428154 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/mysql-bootstrap/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.653756 4681 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/mysql-bootstrap/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.708502 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/galera/0.log" Apr 04 03:40:25 crc kubenswrapper[4681]: I0404 03:40:25.739149 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_e453c2ba-d2af-4ad5-8f25-91b386e9f9a6/openstackclient/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.028692 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jz78r_616e7c64-534b-41e8-8ad9-0abf8f05d3d5/ovn-controller/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.071848 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nlvvn_209debba-9c1c-4486-82c7-38424335f889/openstack-network-exporter/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.250198 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server-init/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.447702 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server-init/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.479482 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.743983 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4c79dddc-8bad-4bfb-920f-434aea2c400c/openstack-network-exporter/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.857854 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovs-vswitchd/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.937016 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4c79dddc-8bad-4bfb-920f-434aea2c400c/ovn-northd/0.log" Apr 04 03:40:26 crc kubenswrapper[4681]: I0404 03:40:26.958249 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fh7b5_bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.079329 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30fe1cfd-59db-4c85-bf2c-a476faeabd9c/openstack-network-exporter/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.246925 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30fe1cfd-59db-4c85-bf2c-a476faeabd9c/ovsdbserver-nb/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.253853 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f2a3604e-5c76-460f-aebb-5e2e89688d74/openstack-network-exporter/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.365092 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f2a3604e-5c76-460f-aebb-5e2e89688d74/ovsdbserver-sb/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.773682 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674794d9f6-5s9ps_b5b3ede0-d5ce-41d0-a320-ee0e732c8f86/placement-api/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.851175 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/init-config-reloader/0.log" Apr 04 03:40:27 crc kubenswrapper[4681]: I0404 03:40:27.853859 4681 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-674794d9f6-5s9ps_b5b3ede0-d5ce-41d0-a320-ee0e732c8f86/placement-log/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.131577 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/init-config-reloader/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.131748 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/config-reloader/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.165865 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/prometheus/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.198575 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/thanos-sidecar/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.639096 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/setup-container/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.826381 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/setup-container/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.883392 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/rabbitmq/0.log" Apr 04 03:40:28 crc kubenswrapper[4681]: I0404 03:40:28.956749 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/setup-container/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.168279 4681 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/setup-container/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.274663 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h_0b001b06-583d-4b8d-974e-e7cf078a514d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.277329 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/rabbitmq/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.580259 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4x4ws_3291d540-df5f-43ec-a016-a06df4e58ce6/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.591431 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79_3deb575c-2d6c-41a6-9650-3dddc756bb67/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.870627 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cwn7s_17d6bc83-830a-47e3-b5c6-96ae2ecfad52/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:29 crc kubenswrapper[4681]: I0404 03:40:29.890783 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bdcnq_00fa5e33-c452-4b88-bd67-bc0e6094d232/ssh-known-hosts-edpm-deployment/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.175107 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8456d9bb7c-dcjw6_cb09ea7e-aac7-4a55-962c-ca71e66e26a8/proxy-server/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.291221 4681 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8456d9bb7c-dcjw6_cb09ea7e-aac7-4a55-962c-ca71e66e26a8/proxy-httpd/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.383252 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gz57l_6d76298d-bafc-4c57-9e19-f77f982a3187/swift-ring-rebalance/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.511896 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-reaper/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.552186 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-auditor/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.743850 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-replicator/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.748031 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-server/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.789824 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-auditor/0.log" Apr 04 03:40:30 crc kubenswrapper[4681]: I0404 03:40:30.865915 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-replicator/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.008121 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-server/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.052188 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-updater/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.060386 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-auditor/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.104786 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-expirer/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.348649 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-replicator/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.363615 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-server/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.402721 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-updater/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.413821 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/rsync/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.620771 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/swift-recon-cron/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.880460 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_9d245209-8139-42b0-aae0-5cafddfc00dd/tempest-tests-tempest-tests-runner/0.log" Apr 04 03:40:31 crc kubenswrapper[4681]: I0404 03:40:31.959499 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4e1e5191-e069-4447-ad1d-00e07ba61407/test-operator-logs-container/0.log" Apr 04 03:40:32 crc kubenswrapper[4681]: I0404 03:40:32.179073 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm_79cff0ca-47f9-4198-abf2-a488089c2ade/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:32 crc kubenswrapper[4681]: I0404 03:40:32.341061 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn_d7ef2b80-e8d5-4f17-8617-d3a88ef35137/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:40:33 crc kubenswrapper[4681]: I0404 03:40:33.207503 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_8abb1419-6466-40ac-b2ec-2d6306e02026/watcher-applier/0.log" Apr 04 03:40:34 crc kubenswrapper[4681]: I0404 03:40:34.082478 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7f43afd0-4f66-4841-a564-7f47a84be4b1/watcher-api-log/0.log" Apr 04 03:40:35 crc kubenswrapper[4681]: I0404 03:40:35.086587 4681 scope.go:117] "RemoveContainer" containerID="a3dfc533218a8b3779f52f8ad941ee296040fd6aee03a14dba61c589fa046d26" Apr 04 03:40:36 crc kubenswrapper[4681]: I0404 03:40:36.922116 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0/watcher-decision-engine/0.log" Apr 04 03:40:38 crc kubenswrapper[4681]: I0404 03:40:38.291893 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7f43afd0-4f66-4841-a564-7f47a84be4b1/watcher-api/0.log" Apr 04 03:40:43 crc kubenswrapper[4681]: I0404 03:40:43.991691 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_160ce09d-ccb7-4ce9-8bbe-574e115fcc3f/memcached/0.log" Apr 04 03:41:03 crc 
kubenswrapper[4681]: I0404 03:41:03.579386 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86644c9c9c-kvnd9_895bcf63-b464-4408-a0f2-8217d1a6179b/manager/0.log" Apr 04 03:41:03 crc kubenswrapper[4681]: I0404 03:41:03.806089 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58689c6fff-rnnzd_6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99/manager/0.log" Apr 04 03:41:03 crc kubenswrapper[4681]: I0404 03:41:03.829893 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d46cccfb9-ttwtp_23b37abe-289b-45e9-b55b-e2985e411401/manager/0.log" Apr 04 03:41:03 crc kubenswrapper[4681]: I0404 03:41:03.968877 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.156155 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.159610 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.200327 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.328395 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.329601 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.386042 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/extract/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.593890 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-648bdc7f99-skr68_4513182b-1bdb-40a2-ba02-2e8aa8567819/manager/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.617399 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8684f86954-4z752_06717285-4d9d-4b9d-919e-106dd0ec0274/manager/0.log" Apr 04 03:41:04 crc kubenswrapper[4681]: I0404 03:41:04.807374 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6ccfd84cb4-sq9cm_be876d09-d6fd-46f7-a03c-8c13f72bee75/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.045800 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f96574b5-82k6f_1a4403a6-7904-4764-aba4-02a2bcc4bc19/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.226564 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-dbf8bb784-4gx6m_80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.260213 4681 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7ffb6b7cdc-gcbfv_4536a628-89aa-4f79-b180-9199d3cf390a/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.322802 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6b7497dc59-tllnk_82ce5791-77cb-418c-b3d2-7f49f625ccf1/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.509361 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6554749d88-tj6wj_28828ebb-13dc-4ba1-98e1-39c6f38e9245/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.618214 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-74tl4_856d74a1-4df8-446a-a82b-3dcc76f1af70/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.771534 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d6f9fd68c-x7x9p_3ac3008b-06b0-4ab7-a59f-3e7682627410/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.814083 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7594f57946-c9j8w_d8331de2-1469-4856-a56c-f1e107779ca4/manager/0.log" Apr 04 03:41:05 crc kubenswrapper[4681]: I0404 03:41:05.988732 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b7b49d78f-skbql_b54a4f45-de00-4dd5-95d4-f96a21d34189/manager/0.log" Apr 04 03:41:06 crc kubenswrapper[4681]: I0404 03:41:06.112081 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5645c5b4f-jmkvr_08d96c31-f9c1-4308-ba34-bb5135a86eb8/operator/0.log" Apr 04 03:41:06 crc kubenswrapper[4681]: I0404 
03:41:06.637620 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc9vv_4551c34d-f733-4478-9613-7618e59322b5/registry-server/0.log" Apr 04 03:41:06 crc kubenswrapper[4681]: I0404 03:41:06.875471 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-84464c7c78-brc8n_b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab/manager/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.181466 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zv8zs_2acd20f5-b31c-411a-989c-f0ad12628894/operator/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.269191 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-559d8fdb6b-tmg65_6479782a-b4ab-4e90-a9bd-29ef0a41f9d7/manager/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.469334 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-fbdcf7f7b-844tj_eb76f1dc-bae9-491f-a58e-3cc1f9c15571/manager/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.556511 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-667cfd88d7-2k5wm_df89fca6-3fb4-4d85-95df-4b48e4a1e884/manager/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.730673 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56ccc97cf5-j87hk_da0d70da-b61c-41ee-938b-f4a931300f75/manager/0.log" Apr 04 03:41:07 crc kubenswrapper[4681]: I0404 03:41:07.883746 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6f76d4c7-2vrfg_8e44912b-0956-49e8-ad3e-140b3d60838e/manager/0.log" Apr 04 03:41:08 crc kubenswrapper[4681]: I0404 
03:41:08.090305 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-58b78987f4-nwsmd_a2899081-691e-4ad2-8e98-4fb8b955a0cd/manager/0.log" Apr 04 03:41:27 crc kubenswrapper[4681]: I0404 03:41:27.339964 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c76br_b8f9c5e4-05ac-48dd-8e04-81b8087e3a72/control-plane-machine-set-operator/0.log" Apr 04 03:41:27 crc kubenswrapper[4681]: I0404 03:41:27.571804 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mftw8_15b64868-afa1-4d70-bfda-799ed31decdb/kube-rbac-proxy/0.log" Apr 04 03:41:27 crc kubenswrapper[4681]: I0404 03:41:27.611644 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mftw8_15b64868-afa1-4d70-bfda-799ed31decdb/machine-api-operator/0.log" Apr 04 03:41:40 crc kubenswrapper[4681]: I0404 03:41:40.196526 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-b7xpz_d71066c7-07f6-471d-9d8d-6746b3f229e9/cert-manager-controller/0.log" Apr 04 03:41:40 crc kubenswrapper[4681]: I0404 03:41:40.286622 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2vkfl_9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e/cert-manager-cainjector/0.log" Apr 04 03:41:40 crc kubenswrapper[4681]: I0404 03:41:40.364965 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-6ppdd_4d2e304b-f02c-427a-b2a2-f1e8cc7efb70/cert-manager-webhook/0.log" Apr 04 03:41:52 crc kubenswrapper[4681]: I0404 03:41:52.874040 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7b5ddc4dc7-cg7df_04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53/nmstate-console-plugin/0.log" Apr 04 03:41:53 crc 
kubenswrapper[4681]: I0404 03:41:53.049709 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-l6xdx_e174b98a-0ca7-4dfc-846f-b0395cb9b4a4/nmstate-handler/0.log" Apr 04 03:41:53 crc kubenswrapper[4681]: I0404 03:41:53.065080 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-gc988_ec47a21e-ac21-4720-ac9c-b0b9f50bfc85/kube-rbac-proxy/0.log" Apr 04 03:41:53 crc kubenswrapper[4681]: I0404 03:41:53.105919 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-gc988_ec47a21e-ac21-4720-ac9c-b0b9f50bfc85/nmstate-metrics/0.log" Apr 04 03:41:53 crc kubenswrapper[4681]: I0404 03:41:53.309796 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6b8c6447b-vd5sz_dd449ba7-bc18-4cdb-8f0f-05c997e2274e/nmstate-operator/0.log" Apr 04 03:41:53 crc kubenswrapper[4681]: I0404 03:41:53.346402 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-jn9q8_fbb2aa57-946f-43fb-9380-83a69cced169/nmstate-webhook/0.log" Apr 04 03:41:56 crc kubenswrapper[4681]: I0404 03:41:56.524082 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:41:56 crc kubenswrapper[4681]: I0404 03:41:56.524899 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.175336 4681 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587902-88w2x"] Apr 04 03:42:00 crc kubenswrapper[4681]: E0404 03:42:00.176973 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf00ebf8-c237-400a-9ca4-ca71495e1e10" containerName="oc" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.176990 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf00ebf8-c237-400a-9ca4-ca71495e1e10" containerName="oc" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.177921 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf00ebf8-c237-400a-9ca4-ca71495e1e10" containerName="oc" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.188596 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.197533 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587902-88w2x"] Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.218846 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.219510 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.219818 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.339067 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k22k6\" (UniqueName: \"kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6\") pod \"auto-csr-approver-29587902-88w2x\" (UID: \"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32\") " pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:00 crc 
kubenswrapper[4681]: I0404 03:42:00.443484 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k22k6\" (UniqueName: \"kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6\") pod \"auto-csr-approver-29587902-88w2x\" (UID: \"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32\") " pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.464189 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k22k6\" (UniqueName: \"kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6\") pod \"auto-csr-approver-29587902-88w2x\" (UID: \"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32\") " pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:00 crc kubenswrapper[4681]: I0404 03:42:00.537669 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:01 crc kubenswrapper[4681]: I0404 03:42:01.024130 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587902-88w2x"] Apr 04 03:42:01 crc kubenswrapper[4681]: I0404 03:42:01.483213 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587902-88w2x" event={"ID":"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32","Type":"ContainerStarted","Data":"6a2158fdc481d95d898aef410c76d6971cdb0d14c7268a6d78dcab7bee7a9003"} Apr 04 03:42:02 crc kubenswrapper[4681]: I0404 03:42:02.491971 4681 generic.go:334] "Generic (PLEG): container finished" podID="22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" containerID="450b7b9ccc623454c5c59f6c5653923e3ca0753ef29f3f30618c6f21eefd8189" exitCode=0 Apr 04 03:42:02 crc kubenswrapper[4681]: I0404 03:42:02.492086 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587902-88w2x" 
event={"ID":"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32","Type":"ContainerDied","Data":"450b7b9ccc623454c5c59f6c5653923e3ca0753ef29f3f30618c6f21eefd8189"} Apr 04 03:42:03 crc kubenswrapper[4681]: I0404 03:42:03.925824 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.020627 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k22k6\" (UniqueName: \"kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6\") pod \"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32\" (UID: \"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32\") " Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.029591 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6" (OuterVolumeSpecName: "kube-api-access-k22k6") pod "22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" (UID: "22ae55dc-dec6-4d8a-baf4-c5dd56cecb32"). InnerVolumeSpecName "kube-api-access-k22k6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.123402 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k22k6\" (UniqueName: \"kubernetes.io/projected/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32-kube-api-access-k22k6\") on node \"crc\" DevicePath \"\"" Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.517533 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587902-88w2x" event={"ID":"22ae55dc-dec6-4d8a-baf4-c5dd56cecb32","Type":"ContainerDied","Data":"6a2158fdc481d95d898aef410c76d6971cdb0d14c7268a6d78dcab7bee7a9003"} Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.517812 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a2158fdc481d95d898aef410c76d6971cdb0d14c7268a6d78dcab7bee7a9003" Apr 04 03:42:04 crc kubenswrapper[4681]: I0404 03:42:04.517592 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587902-88w2x" Apr 04 03:42:05 crc kubenswrapper[4681]: I0404 03:42:05.025660 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587896-vt9tm"] Apr 04 03:42:05 crc kubenswrapper[4681]: I0404 03:42:05.036093 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587896-vt9tm"] Apr 04 03:42:05 crc kubenswrapper[4681]: I0404 03:42:05.213810 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d61db7d-a894-4d42-8713-f0df549a25d9" path="/var/lib/kubelet/pods/2d61db7d-a894-4d42-8713-f0df549a25d9/volumes" Apr 04 03:42:07 crc kubenswrapper[4681]: I0404 03:42:07.116231 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-86dff4bf76-pkfp4_b23b52f2-8062-48f1-a937-590414fcb369/prometheus-operator/0.log" Apr 04 03:42:07 crc kubenswrapper[4681]: I0404 03:42:07.329048 4681 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv_c11383e1-c1fe-4d1e-ab47-234adca1f589/prometheus-operator-admission-webhook/0.log" Apr 04 03:42:07 crc kubenswrapper[4681]: I0404 03:42:07.413956 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-bpz42_ca6efa76-cc20-4742-9c8b-1ef70ff6acff/prometheus-operator-admission-webhook/0.log" Apr 04 03:42:07 crc kubenswrapper[4681]: I0404 03:42:07.574564 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-dd944d769-szx5n_a510961d-019d-41d4-8a75-66f69f5d6728/operator/0.log" Apr 04 03:42:07 crc kubenswrapper[4681]: I0404 03:42:07.608310 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-74445bf4b8-4fv89_0c2979a7-06b5-4451-875e-f8e64da75780/perses-operator/0.log" Apr 04 03:42:20 crc kubenswrapper[4681]: I0404 03:42:20.833789 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bb64cd5d7-vnp7h_10953a36-52e8-4614-af9d-7df97c580ffc/kube-rbac-proxy/0.log" Apr 04 03:42:20 crc kubenswrapper[4681]: I0404 03:42:20.887174 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bb64cd5d7-vnp7h_10953a36-52e8-4614-af9d-7df97c580ffc/controller/0.log" Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.031812 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.240328 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.250327 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.284244 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.297707 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.448811 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.476928 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.511589 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.549939 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.702353 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.722622 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.731171 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.808362 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/controller/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.925660 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/frr-metrics/0.log"
Apr 04 03:42:21 crc kubenswrapper[4681]: I0404 03:42:21.925718 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/kube-rbac-proxy/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.042041 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/kube-rbac-proxy-frr/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.139917 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/reloader/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.256997 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-wn7bd_82278f5d-bc0c-45d9-9efd-170e322295dd/frr-k8s-webhook-server/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.492894 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b949c746f-bbmhk_bbb46a7c-3e17-4b01-8a75-20a864bee1d3/manager/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.597118 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b54d9cb4b-kbxnx_22266062-5a6f-4352-80ea-f9cb334bf963/webhook-server/0.log"
Apr 04 03:42:22 crc kubenswrapper[4681]: I0404 03:42:22.755055 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lvp5w_b9303934-434e-47f9-8c2b-36d6e6320ab2/kube-rbac-proxy/0.log"
Apr 04 03:42:23 crc kubenswrapper[4681]: I0404 03:42:23.368555 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lvp5w_b9303934-434e-47f9-8c2b-36d6e6320ab2/speaker/0.log"
Apr 04 03:42:23 crc kubenswrapper[4681]: I0404 03:42:23.972831 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/frr/0.log"
Apr 04 03:42:26 crc kubenswrapper[4681]: I0404 03:42:26.523728 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:42:26 crc kubenswrapper[4681]: I0404 03:42:26.524432 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:42:35 crc kubenswrapper[4681]: I0404 03:42:35.249869 4681 scope.go:117] "RemoveContainer" containerID="c35e34c79cd031a4f4015e6ef9f10aeaaa49dacfb8a922f1777c15556eec69b1"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.004507 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.208552 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.291787 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.316188 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.498128 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/extract/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.501886 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.502944 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.695186 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.908118 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.936203 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log"
Apr 04 03:42:36 crc kubenswrapper[4681]: I0404 03:42:36.936743 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.110790 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.124841 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.199418 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/extract/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.320512 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.486426 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.503150 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.529080 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.706855 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.709006 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log"
Apr 04 03:42:37 crc kubenswrapper[4681]: I0404 03:42:37.959678 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/extract/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.039226 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.230700 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.254910 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.255240 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.456874 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.504853 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.732356 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.934920 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log"
Apr 04 03:42:38 crc kubenswrapper[4681]: I0404 03:42:38.997409 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.014783 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.216385 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.224119 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.376896 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/registry-server/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.571557 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.779309 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.826145 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log"
Apr 04 03:42:39 crc kubenswrapper[4681]: I0404 03:42:39.864996 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.085959 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.147413 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.199514 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/extract/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.268119 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/registry-server/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.365708 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hdr5h_76a1fdd0-d5af-45fe-8f41-bed5f036a8e1/marketplace-operator/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.439294 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.657436 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.671036 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.671736 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.898013 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log"
Apr 04 03:42:40 crc kubenswrapper[4681]: I0404 03:42:40.964236 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.006316 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.138238 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/registry-server/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.208690 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.254679 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.282767 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.437623 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log"
Apr 04 03:42:41 crc kubenswrapper[4681]: I0404 03:42:41.561231 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log"
Apr 04 03:42:42 crc kubenswrapper[4681]: I0404 03:42:42.094293 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/registry-server/0.log"
Apr 04 03:42:54 crc kubenswrapper[4681]: I0404 03:42:54.424377 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-86dff4bf76-pkfp4_b23b52f2-8062-48f1-a937-590414fcb369/prometheus-operator/0.log"
Apr 04 03:42:54 crc kubenswrapper[4681]: I0404 03:42:54.440094 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv_c11383e1-c1fe-4d1e-ab47-234adca1f589/prometheus-operator-admission-webhook/0.log"
Apr 04 03:42:54 crc kubenswrapper[4681]: I0404 03:42:54.478935 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-bpz42_ca6efa76-cc20-4742-9c8b-1ef70ff6acff/prometheus-operator-admission-webhook/0.log"
Apr 04 03:42:54 crc kubenswrapper[4681]: I0404 03:42:54.609846 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-74445bf4b8-4fv89_0c2979a7-06b5-4451-875e-f8e64da75780/perses-operator/0.log"
Apr 04 03:42:54 crc kubenswrapper[4681]: I0404 03:42:54.622230 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-dd944d769-szx5n_a510961d-019d-41d4-8a75-66f69f5d6728/operator/0.log"
Apr 04 03:42:56 crc kubenswrapper[4681]: I0404 03:42:56.524323 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Apr 04 03:42:56 crc kubenswrapper[4681]: I0404 03:42:56.525096 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Apr 04 03:42:56 crc kubenswrapper[4681]: I0404 03:42:56.525149 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr"
Apr 04 03:42:56 crc kubenswrapper[4681]: I0404 03:42:56.526297 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Apr 04 03:42:56 crc kubenswrapper[4681]: I0404 03:42:56.526483 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60" gracePeriod=600
Apr 04 03:42:57 crc kubenswrapper[4681]: I0404 03:42:57.054284 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60" exitCode=0
Apr 04 03:42:57 crc kubenswrapper[4681]: I0404 03:42:57.054331 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60"}
Apr 04 03:42:57 crc kubenswrapper[4681]: I0404 03:42:57.054589 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649"}
Apr 04 03:42:57 crc kubenswrapper[4681]: I0404 03:42:57.054611 4681 scope.go:117] "RemoveContainer" containerID="e6ea3afc9053f8a419be029e9129e5308e72192f68fd39e7a983304500440f3a"
Apr 04 03:42:58 crc kubenswrapper[4681]: E0404 03:42:58.093416 4681 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.71:42694->38.129.56.71:44093: write tcp 38.129.56.71:42694->38.129.56.71:44093: write: broken pipe
Apr 04 03:43:27 crc kubenswrapper[4681]: I0404 03:43:27.955428 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:27 crc kubenswrapper[4681]: E0404 03:43:27.956415 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" containerName="oc"
Apr 04 03:43:27 crc kubenswrapper[4681]: I0404 03:43:27.956429 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" containerName="oc"
Apr 04 03:43:27 crc kubenswrapper[4681]: I0404 03:43:27.956618 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" containerName="oc"
Apr 04 03:43:27 crc kubenswrapper[4681]: I0404 03:43:27.958020 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:27 crc kubenswrapper[4681]: I0404 03:43:27.982380 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.113587 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.113687 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvs9\" (UniqueName: \"kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.113714 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.215626 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.216021 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.216207 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.216278 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvs9\" (UniqueName: \"kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.216735 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.238987 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvs9\" (UniqueName: \"kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9\") pod \"redhat-marketplace-mjcd9\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") " pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.276933 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:28 crc kubenswrapper[4681]: I0404 03:43:28.831766 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:29 crc kubenswrapper[4681]: I0404 03:43:29.382448 4681 generic.go:334] "Generic (PLEG): container finished" podID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerID="715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e" exitCode=0
Apr 04 03:43:29 crc kubenswrapper[4681]: I0404 03:43:29.382496 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerDied","Data":"715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e"}
Apr 04 03:43:29 crc kubenswrapper[4681]: I0404 03:43:29.382524 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerStarted","Data":"7e767bc7cb18251999f1917c7f06f381dfe0d6d6a092b6e59e2a35efef946b0d"}
Apr 04 03:43:29 crc kubenswrapper[4681]: I0404 03:43:29.384678 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Apr 04 03:43:30 crc kubenswrapper[4681]: I0404 03:43:30.394902 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerStarted","Data":"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"}
Apr 04 03:43:31 crc kubenswrapper[4681]: I0404 03:43:31.407123 4681 generic.go:334] "Generic (PLEG): container finished" podID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerID="ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a" exitCode=0
Apr 04 03:43:31 crc kubenswrapper[4681]: I0404 03:43:31.407192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerDied","Data":"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"}
Apr 04 03:43:32 crc kubenswrapper[4681]: I0404 03:43:32.417591 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerStarted","Data":"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"}
Apr 04 03:43:32 crc kubenswrapper[4681]: I0404 03:43:32.443201 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mjcd9" podStartSLOduration=3.071575935 podStartE2EDuration="5.443181599s" podCreationTimestamp="2026-04-04 03:43:27 +0000 UTC" firstStartedPulling="2026-04-04 03:43:29.384436988 +0000 UTC m=+6489.050212108" lastFinishedPulling="2026-04-04 03:43:31.756042652 +0000 UTC m=+6491.421817772" observedRunningTime="2026-04-04 03:43:32.433876036 +0000 UTC m=+6492.099651166" watchObservedRunningTime="2026-04-04 03:43:32.443181599 +0000 UTC m=+6492.108956719"
Apr 04 03:43:38 crc kubenswrapper[4681]: I0404 03:43:38.277140 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:38 crc kubenswrapper[4681]: I0404 03:43:38.277701 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:38 crc kubenswrapper[4681]: I0404 03:43:38.329406 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:38 crc kubenswrapper[4681]: I0404 03:43:38.520792 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:38 crc kubenswrapper[4681]: I0404 03:43:38.578777 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:40 crc kubenswrapper[4681]: I0404 03:43:40.501481 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mjcd9" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="registry-server" containerID="cri-o://0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf" gracePeriod=2
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.011723 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.148736 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities\") pod \"993b25c7-5248-466d-b8f7-2c93ef107ddc\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") "
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.148909 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvs9\" (UniqueName: \"kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9\") pod \"993b25c7-5248-466d-b8f7-2c93ef107ddc\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") "
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.149095 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content\") pod \"993b25c7-5248-466d-b8f7-2c93ef107ddc\" (UID: \"993b25c7-5248-466d-b8f7-2c93ef107ddc\") "
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.149525 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities" (OuterVolumeSpecName: "utilities") pod "993b25c7-5248-466d-b8f7-2c93ef107ddc" (UID: "993b25c7-5248-466d-b8f7-2c93ef107ddc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.159563 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9" (OuterVolumeSpecName: "kube-api-access-xqvs9") pod "993b25c7-5248-466d-b8f7-2c93ef107ddc" (UID: "993b25c7-5248-466d-b8f7-2c93ef107ddc"). InnerVolumeSpecName "kube-api-access-xqvs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.179910 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "993b25c7-5248-466d-b8f7-2c93ef107ddc" (UID: "993b25c7-5248-466d-b8f7-2c93ef107ddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.252159 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.252191 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993b25c7-5248-466d-b8f7-2c93ef107ddc-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.252202 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvs9\" (UniqueName: \"kubernetes.io/projected/993b25c7-5248-466d-b8f7-2c93ef107ddc-kube-api-access-xqvs9\") on node \"crc\" DevicePath \"\""
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.517200 4681 generic.go:334] "Generic (PLEG): container finished" podID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerID="0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf" exitCode=0
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.517249 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerDied","Data":"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"}
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.517314 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mjcd9" event={"ID":"993b25c7-5248-466d-b8f7-2c93ef107ddc","Type":"ContainerDied","Data":"7e767bc7cb18251999f1917c7f06f381dfe0d6d6a092b6e59e2a35efef946b0d"}
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.517336 4681 scope.go:117] "RemoveContainer" containerID="0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.517485 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mjcd9"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.547717 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.560645 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mjcd9"]
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.562486 4681 scope.go:117] "RemoveContainer" containerID="ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.596984 4681 scope.go:117] "RemoveContainer" containerID="715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.643082 4681 scope.go:117] "RemoveContainer" containerID="0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"
Apr 04 03:43:41 crc kubenswrapper[4681]: E0404 03:43:41.643750 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf\": container with ID starting with 0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf not found: ID does not exist" containerID="0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.643791 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf"} err="failed to get container status \"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf\": rpc error: code = NotFound desc = could not find container \"0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf\": container with ID starting with 0aada0698fe36f09f86eeeb1757da6aaf60290c414bfa4c0f79130b47079b7bf not found: ID does not exist"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.643821 4681 scope.go:117] "RemoveContainer" containerID="ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"
Apr 04 03:43:41 crc kubenswrapper[4681]: E0404 03:43:41.644237 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a\": container with ID starting with ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a not found: ID does not exist" containerID="ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.644257 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a"} err="failed to get container status \"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a\": rpc error: code = NotFound desc = could not find container \"ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a\": container with ID starting with ccc605bdb907f613f03d69cba9c1de8132f757f83969fc8b1746116324568b1a not found: ID does not exist"
Apr 04 03:43:41 crc kubenswrapper[4681]: I0404 03:43:41.644284 4681 scope.go:117] "RemoveContainer" containerID="715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e"
Apr 04 03:43:41 crc kubenswrapper[4681]: E0404 03:43:41.644538 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e\": container with ID starting with 715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e not found: ID does not exist" containerID="715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e"
Apr 04 03:43:41 crc 
kubenswrapper[4681]: I0404 03:43:41.644559 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e"} err="failed to get container status \"715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e\": rpc error: code = NotFound desc = could not find container \"715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e\": container with ID starting with 715135b702a0ca5b2b21677062e9ad4a8e60ae99707fb8dbbe98ce8153fd9f5e not found: ID does not exist" Apr 04 03:43:43 crc kubenswrapper[4681]: I0404 03:43:43.229993 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" path="/var/lib/kubelet/pods/993b25c7-5248-466d-b8f7-2c93ef107ddc/volumes" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.182016 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587904-8g84d"] Apr 04 03:44:00 crc kubenswrapper[4681]: E0404 03:44:00.183752 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="extract-content" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.183779 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="extract-content" Apr 04 03:44:00 crc kubenswrapper[4681]: E0404 03:44:00.183817 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="registry-server" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.183827 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="registry-server" Apr 04 03:44:00 crc kubenswrapper[4681]: E0404 03:44:00.183861 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="extract-utilities" Apr 
04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.183869 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="extract-utilities" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.184099 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b25c7-5248-466d-b8f7-2c93ef107ddc" containerName="registry-server" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.185189 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.187748 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.188576 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.191736 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.195174 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587904-8g84d"] Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.284465 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7krxt\" (UniqueName: \"kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt\") pod \"auto-csr-approver-29587904-8g84d\" (UID: \"508cebce-fa34-4131-a195-70dc04ddc013\") " pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.386408 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7krxt\" (UniqueName: \"kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt\") 
pod \"auto-csr-approver-29587904-8g84d\" (UID: \"508cebce-fa34-4131-a195-70dc04ddc013\") " pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.418154 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7krxt\" (UniqueName: \"kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt\") pod \"auto-csr-approver-29587904-8g84d\" (UID: \"508cebce-fa34-4131-a195-70dc04ddc013\") " pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.507810 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:00 crc kubenswrapper[4681]: I0404 03:44:00.959384 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587904-8g84d"] Apr 04 03:44:01 crc kubenswrapper[4681]: I0404 03:44:01.739478 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587904-8g84d" event={"ID":"508cebce-fa34-4131-a195-70dc04ddc013","Type":"ContainerStarted","Data":"563e5f601706911aec8be7feee1314f590566f1bd4151ae6451e4a95aa36bba3"} Apr 04 03:44:02 crc kubenswrapper[4681]: I0404 03:44:02.762468 4681 generic.go:334] "Generic (PLEG): container finished" podID="508cebce-fa34-4131-a195-70dc04ddc013" containerID="279ad1e9e16f83ac7fd7211027fac9579d8d2586fe1d318e6a07d53d5eac05e7" exitCode=0 Apr 04 03:44:02 crc kubenswrapper[4681]: I0404 03:44:02.762619 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587904-8g84d" event={"ID":"508cebce-fa34-4131-a195-70dc04ddc013","Type":"ContainerDied","Data":"279ad1e9e16f83ac7fd7211027fac9579d8d2586fe1d318e6a07d53d5eac05e7"} Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.147050 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.296542 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7krxt\" (UniqueName: \"kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt\") pod \"508cebce-fa34-4131-a195-70dc04ddc013\" (UID: \"508cebce-fa34-4131-a195-70dc04ddc013\") " Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.304804 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt" (OuterVolumeSpecName: "kube-api-access-7krxt") pod "508cebce-fa34-4131-a195-70dc04ddc013" (UID: "508cebce-fa34-4131-a195-70dc04ddc013"). InnerVolumeSpecName "kube-api-access-7krxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.401403 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7krxt\" (UniqueName: \"kubernetes.io/projected/508cebce-fa34-4131-a195-70dc04ddc013-kube-api-access-7krxt\") on node \"crc\" DevicePath \"\"" Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.786227 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587904-8g84d" event={"ID":"508cebce-fa34-4131-a195-70dc04ddc013","Type":"ContainerDied","Data":"563e5f601706911aec8be7feee1314f590566f1bd4151ae6451e4a95aa36bba3"} Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.786294 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563e5f601706911aec8be7feee1314f590566f1bd4151ae6451e4a95aa36bba3" Apr 04 03:44:04 crc kubenswrapper[4681]: I0404 03:44:04.786350 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587904-8g84d" Apr 04 03:44:05 crc kubenswrapper[4681]: I0404 03:44:05.241581 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587898-kcmp6"] Apr 04 03:44:05 crc kubenswrapper[4681]: I0404 03:44:05.251658 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587898-kcmp6"] Apr 04 03:44:07 crc kubenswrapper[4681]: I0404 03:44:07.216759 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20aaf7bf-469a-462c-a60a-d44305ae3848" path="/var/lib/kubelet/pods/20aaf7bf-469a-462c-a60a-d44305ae3848/volumes" Apr 04 03:44:35 crc kubenswrapper[4681]: I0404 03:44:35.355910 4681 scope.go:117] "RemoveContainer" containerID="ab3bba527904c622cdfd010e79df10068efaaf0560449381446e1541b89f8bae" Apr 04 03:44:56 crc kubenswrapper[4681]: I0404 03:44:56.524988 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:44:56 crc kubenswrapper[4681]: I0404 03:44:56.525527 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.157191 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz"] Apr 04 03:45:00 crc kubenswrapper[4681]: E0404 03:45:00.158020 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="508cebce-fa34-4131-a195-70dc04ddc013" containerName="oc" Apr 04 
03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.158033 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="508cebce-fa34-4131-a195-70dc04ddc013" containerName="oc" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.158231 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="508cebce-fa34-4131-a195-70dc04ddc013" containerName="oc" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.158918 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.167740 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.167905 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.188395 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz"] Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.305003 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.305630 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.305900 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crjf2\" (UniqueName: \"kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.408976 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crjf2\" (UniqueName: \"kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.409116 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.409158 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.410494 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.432326 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.461657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crjf2\" (UniqueName: \"kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2\") pod \"collect-profiles-29587905-2hnfz\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:00 crc kubenswrapper[4681]: I0404 03:45:00.489809 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:01 crc kubenswrapper[4681]: I0404 03:45:01.102021 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz"] Apr 04 03:45:01 crc kubenswrapper[4681]: I0404 03:45:01.427614 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" event={"ID":"6b3be5c9-491d-4ed4-a81e-b050b8e1be65","Type":"ContainerStarted","Data":"0a6d777b5cfa9ad6b3ffaca41317453ce49596d77003e5d04e5a2b7b66991887"} Apr 04 03:45:01 crc kubenswrapper[4681]: I0404 03:45:01.427983 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" event={"ID":"6b3be5c9-491d-4ed4-a81e-b050b8e1be65","Type":"ContainerStarted","Data":"8459495424c5c92f5ba15e328a844b1ec42d73ac22e56777699dfcbe26d2ede3"} Apr 04 03:45:01 crc kubenswrapper[4681]: I0404 03:45:01.457902 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" podStartSLOduration=1.457875265 podStartE2EDuration="1.457875265s" podCreationTimestamp="2026-04-04 03:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 03:45:01.445213739 +0000 UTC m=+6581.110988859" watchObservedRunningTime="2026-04-04 03:45:01.457875265 +0000 UTC m=+6581.123650385" Apr 04 03:45:02 crc kubenswrapper[4681]: I0404 03:45:02.438504 4681 generic.go:334] "Generic (PLEG): container finished" podID="6b3be5c9-491d-4ed4-a81e-b050b8e1be65" containerID="0a6d777b5cfa9ad6b3ffaca41317453ce49596d77003e5d04e5a2b7b66991887" exitCode=0 Apr 04 03:45:02 crc kubenswrapper[4681]: I0404 03:45:02.439232 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" event={"ID":"6b3be5c9-491d-4ed4-a81e-b050b8e1be65","Type":"ContainerDied","Data":"0a6d777b5cfa9ad6b3ffaca41317453ce49596d77003e5d04e5a2b7b66991887"} Apr 04 03:45:03 crc kubenswrapper[4681]: I0404 03:45:03.833650 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.014492 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume\") pod \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.014631 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crjf2\" (UniqueName: \"kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2\") pod \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.014910 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume\") pod \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\" (UID: \"6b3be5c9-491d-4ed4-a81e-b050b8e1be65\") " Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.017181 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume" (OuterVolumeSpecName: "config-volume") pod "6b3be5c9-491d-4ed4-a81e-b050b8e1be65" (UID: "6b3be5c9-491d-4ed4-a81e-b050b8e1be65"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.022985 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6b3be5c9-491d-4ed4-a81e-b050b8e1be65" (UID: "6b3be5c9-491d-4ed4-a81e-b050b8e1be65"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.023253 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2" (OuterVolumeSpecName: "kube-api-access-crjf2") pod "6b3be5c9-491d-4ed4-a81e-b050b8e1be65" (UID: "6b3be5c9-491d-4ed4-a81e-b050b8e1be65"). InnerVolumeSpecName "kube-api-access-crjf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.118085 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crjf2\" (UniqueName: \"kubernetes.io/projected/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-kube-api-access-crjf2\") on node \"crc\" DevicePath \"\"" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.118336 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-config-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.118419 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6b3be5c9-491d-4ed4-a81e-b050b8e1be65-secret-volume\") on node \"crc\" DevicePath \"\"" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.323285 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k"] Apr 04 03:45:04 crc kubenswrapper[4681]: 
I0404 03:45:04.332915 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587860-stj6k"] Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.458272 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" event={"ID":"6b3be5c9-491d-4ed4-a81e-b050b8e1be65","Type":"ContainerDied","Data":"8459495424c5c92f5ba15e328a844b1ec42d73ac22e56777699dfcbe26d2ede3"} Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.458317 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8459495424c5c92f5ba15e328a844b1ec42d73ac22e56777699dfcbe26d2ede3" Apr 04 03:45:04 crc kubenswrapper[4681]: I0404 03:45:04.458334 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587905-2hnfz" Apr 04 03:45:05 crc kubenswrapper[4681]: I0404 03:45:05.214386 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7239abc9-4b26-4c13-90f7-db97bcd1a76c" path="/var/lib/kubelet/pods/7239abc9-4b26-4c13-90f7-db97bcd1a76c/volumes" Apr 04 03:45:06 crc kubenswrapper[4681]: I0404 03:45:06.479311 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerID="41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b" exitCode=0 Apr 04 03:45:06 crc kubenswrapper[4681]: I0404 03:45:06.479415 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" event={"ID":"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8","Type":"ContainerDied","Data":"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b"} Apr 04 03:45:06 crc kubenswrapper[4681]: I0404 03:45:06.480241 4681 scope.go:117] "RemoveContainer" containerID="41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b" Apr 04 03:45:07 crc kubenswrapper[4681]: I0404 03:45:07.447748 
4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddjjc_must-gather-qsq8k_4e2edff7-0c6b-41c0-acd8-f1e7dec006a8/gather/0.log" Apr 04 03:45:16 crc kubenswrapper[4681]: I0404 03:45:16.517414 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ddjjc/must-gather-qsq8k"] Apr 04 03:45:16 crc kubenswrapper[4681]: I0404 03:45:16.518214 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="copy" containerID="cri-o://5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b" gracePeriod=2 Apr 04 03:45:16 crc kubenswrapper[4681]: I0404 03:45:16.528723 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ddjjc/must-gather-qsq8k"] Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.064095 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddjjc_must-gather-qsq8k_4e2edff7-0c6b-41c0-acd8-f1e7dec006a8/copy/0.log" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.064765 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.195211 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output\") pod \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.195278 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4dtl\" (UniqueName: \"kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl\") pod \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\" (UID: \"4e2edff7-0c6b-41c0-acd8-f1e7dec006a8\") " Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.206637 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl" (OuterVolumeSpecName: "kube-api-access-h4dtl") pod "4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" (UID: "4e2edff7-0c6b-41c0-acd8-f1e7dec006a8"). InnerVolumeSpecName "kube-api-access-h4dtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.298345 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4dtl\" (UniqueName: \"kubernetes.io/projected/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-kube-api-access-h4dtl\") on node \"crc\" DevicePath \"\"" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.383546 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" (UID: "4e2edff7-0c6b-41c0-acd8-f1e7dec006a8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.403891 4681 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8-must-gather-output\") on node \"crc\" DevicePath \"\"" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.603715 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ddjjc_must-gather-qsq8k_4e2edff7-0c6b-41c0-acd8-f1e7dec006a8/copy/0.log" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.605022 4681 generic.go:334] "Generic (PLEG): container finished" podID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerID="5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b" exitCode=143 Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.605125 4681 scope.go:117] "RemoveContainer" containerID="5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.605054 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ddjjc/must-gather-qsq8k" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.629102 4681 scope.go:117] "RemoveContainer" containerID="41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.693091 4681 scope.go:117] "RemoveContainer" containerID="5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b" Apr 04 03:45:17 crc kubenswrapper[4681]: E0404 03:45:17.693512 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b\": container with ID starting with 5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b not found: ID does not exist" containerID="5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.693544 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b"} err="failed to get container status \"5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b\": rpc error: code = NotFound desc = could not find container \"5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b\": container with ID starting with 5462215ba614ec0e034faa26abeb209759719e7895f0c030240b753802cff62b not found: ID does not exist" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.693562 4681 scope.go:117] "RemoveContainer" containerID="41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b" Apr 04 03:45:17 crc kubenswrapper[4681]: E0404 03:45:17.693773 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b\": container with ID starting with 
41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b not found: ID does not exist" containerID="41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b" Apr 04 03:45:17 crc kubenswrapper[4681]: I0404 03:45:17.693801 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b"} err="failed to get container status \"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b\": rpc error: code = NotFound desc = could not find container \"41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b\": container with ID starting with 41db316f07b8bd88b8d5d0fad14cdfcc1b60bd67bbd811a9d683966cdcba464b not found: ID does not exist" Apr 04 03:45:19 crc kubenswrapper[4681]: I0404 03:45:19.214549 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" path="/var/lib/kubelet/pods/4e2edff7-0c6b-41c0-acd8-f1e7dec006a8/volumes" Apr 04 03:45:26 crc kubenswrapper[4681]: I0404 03:45:26.523996 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:45:26 crc kubenswrapper[4681]: I0404 03:45:26.524701 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:45:35 crc kubenswrapper[4681]: I0404 03:45:35.494357 4681 scope.go:117] "RemoveContainer" containerID="e14d57763c15be13727d76006f7126720e23efc1b3efa07b1194029e73e5dade" Apr 04 03:45:35 crc kubenswrapper[4681]: I0404 
03:45:35.523472 4681 scope.go:117] "RemoveContainer" containerID="a0182d3095aeeabaa53f8384729cd9df8d13962bac1c1ded65eb8e2263e933e3" Apr 04 03:45:56 crc kubenswrapper[4681]: I0404 03:45:56.523799 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:45:56 crc kubenswrapper[4681]: I0404 03:45:56.524191 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:45:56 crc kubenswrapper[4681]: I0404 03:45:56.524242 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:45:56 crc kubenswrapper[4681]: I0404 03:45:56.525044 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:45:56 crc kubenswrapper[4681]: I0404 03:45:56.525098 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" gracePeriod=600 Apr 04 03:45:56 crc kubenswrapper[4681]: E0404 03:45:56.644345 4681 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:45:57 crc kubenswrapper[4681]: I0404 03:45:57.024728 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" exitCode=0 Apr 04 03:45:57 crc kubenswrapper[4681]: I0404 03:45:57.024793 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649"} Apr 04 03:45:57 crc kubenswrapper[4681]: I0404 03:45:57.024860 4681 scope.go:117] "RemoveContainer" containerID="19af9a1ef2b1da42b55b30d1de98948b243ad76b2510aa29072f2ddbf1becf60" Apr 04 03:45:57 crc kubenswrapper[4681]: I0404 03:45:57.026781 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:45:57 crc kubenswrapper[4681]: E0404 03:45:57.027177 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.147963 4681 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29587906-pk76c"] Apr 04 03:46:00 crc kubenswrapper[4681]: E0404 03:46:00.149430 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="gather" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.149456 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="gather" Apr 04 03:46:00 crc kubenswrapper[4681]: E0404 03:46:00.149486 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b3be5c9-491d-4ed4-a81e-b050b8e1be65" containerName="collect-profiles" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.149494 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3be5c9-491d-4ed4-a81e-b050b8e1be65" containerName="collect-profiles" Apr 04 03:46:00 crc kubenswrapper[4681]: E0404 03:46:00.149521 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="copy" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.149528 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="copy" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.149823 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b3be5c9-491d-4ed4-a81e-b050b8e1be65" containerName="collect-profiles" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.149934 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="gather" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.150003 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e2edff7-0c6b-41c0-acd8-f1e7dec006a8" containerName="copy" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.151809 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.154939 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.155189 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.155371 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.158917 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587906-pk76c"] Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.255215 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf77n\" (UniqueName: \"kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n\") pod \"auto-csr-approver-29587906-pk76c\" (UID: \"1c60aa57-c814-4a04-a0ba-383b8e59e477\") " pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.357754 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf77n\" (UniqueName: \"kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n\") pod \"auto-csr-approver-29587906-pk76c\" (UID: \"1c60aa57-c814-4a04-a0ba-383b8e59e477\") " pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.382274 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf77n\" (UniqueName: \"kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n\") pod \"auto-csr-approver-29587906-pk76c\" (UID: \"1c60aa57-c814-4a04-a0ba-383b8e59e477\") " 
pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:00 crc kubenswrapper[4681]: I0404 03:46:00.487841 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:01 crc kubenswrapper[4681]: I0404 03:46:01.030860 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587906-pk76c"] Apr 04 03:46:01 crc kubenswrapper[4681]: I0404 03:46:01.068192 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587906-pk76c" event={"ID":"1c60aa57-c814-4a04-a0ba-383b8e59e477","Type":"ContainerStarted","Data":"43aa02f6f2feed3eaf1f01abfe3d27f6bfd8864a1ce918ce70ac3907d8f9f305"} Apr 04 03:46:03 crc kubenswrapper[4681]: I0404 03:46:03.090524 4681 generic.go:334] "Generic (PLEG): container finished" podID="1c60aa57-c814-4a04-a0ba-383b8e59e477" containerID="38b7eb56b05feacd37ca7367b559afe3fd69f8db6f8d3065cb4935fc3c745891" exitCode=0 Apr 04 03:46:03 crc kubenswrapper[4681]: I0404 03:46:03.090611 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587906-pk76c" event={"ID":"1c60aa57-c814-4a04-a0ba-383b8e59e477","Type":"ContainerDied","Data":"38b7eb56b05feacd37ca7367b559afe3fd69f8db6f8d3065cb4935fc3c745891"} Apr 04 03:46:04 crc kubenswrapper[4681]: I0404 03:46:04.482555 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:04 crc kubenswrapper[4681]: I0404 03:46:04.550404 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf77n\" (UniqueName: \"kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n\") pod \"1c60aa57-c814-4a04-a0ba-383b8e59e477\" (UID: \"1c60aa57-c814-4a04-a0ba-383b8e59e477\") " Apr 04 03:46:04 crc kubenswrapper[4681]: I0404 03:46:04.556391 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n" (OuterVolumeSpecName: "kube-api-access-xf77n") pod "1c60aa57-c814-4a04-a0ba-383b8e59e477" (UID: "1c60aa57-c814-4a04-a0ba-383b8e59e477"). InnerVolumeSpecName "kube-api-access-xf77n". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:46:04 crc kubenswrapper[4681]: I0404 03:46:04.653710 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf77n\" (UniqueName: \"kubernetes.io/projected/1c60aa57-c814-4a04-a0ba-383b8e59e477-kube-api-access-xf77n\") on node \"crc\" DevicePath \"\"" Apr 04 03:46:05 crc kubenswrapper[4681]: I0404 03:46:05.110732 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587906-pk76c" event={"ID":"1c60aa57-c814-4a04-a0ba-383b8e59e477","Type":"ContainerDied","Data":"43aa02f6f2feed3eaf1f01abfe3d27f6bfd8864a1ce918ce70ac3907d8f9f305"} Apr 04 03:46:05 crc kubenswrapper[4681]: I0404 03:46:05.110779 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43aa02f6f2feed3eaf1f01abfe3d27f6bfd8864a1ce918ce70ac3907d8f9f305" Apr 04 03:46:05 crc kubenswrapper[4681]: I0404 03:46:05.110812 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587906-pk76c" Apr 04 03:46:05 crc kubenswrapper[4681]: I0404 03:46:05.572440 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587900-b6477"] Apr 04 03:46:05 crc kubenswrapper[4681]: I0404 03:46:05.601455 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587900-b6477"] Apr 04 03:46:07 crc kubenswrapper[4681]: I0404 03:46:07.215704 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf00ebf8-c237-400a-9ca4-ca71495e1e10" path="/var/lib/kubelet/pods/cf00ebf8-c237-400a-9ca4-ca71495e1e10/volumes" Apr 04 03:46:11 crc kubenswrapper[4681]: I0404 03:46:11.210650 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:46:11 crc kubenswrapper[4681]: E0404 03:46:11.211616 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:46:20 crc kubenswrapper[4681]: I0404 03:46:20.314611 4681 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-8456d9bb7c-dcjw6" podUID="cb09ea7e-aac7-4a55-962c-ca71e66e26a8" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Apr 04 03:46:24 crc kubenswrapper[4681]: I0404 03:46:24.200929 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:46:24 crc kubenswrapper[4681]: E0404 03:46:24.201744 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:46:35 crc kubenswrapper[4681]: I0404 03:46:35.655193 4681 scope.go:117] "RemoveContainer" containerID="ce4e008b96470896a388abb99081dd95a45094a0502868f72d23bd59a81511c4" Apr 04 03:46:35 crc kubenswrapper[4681]: I0404 03:46:35.682447 4681 scope.go:117] "RemoveContainer" containerID="20eec71c90814b033c064a83ea3ecbd638acea9725d097f719f8c0e7411adaa0" Apr 04 03:46:36 crc kubenswrapper[4681]: I0404 03:46:36.200684 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:46:36 crc kubenswrapper[4681]: E0404 03:46:36.201196 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:46:50 crc kubenswrapper[4681]: I0404 03:46:50.202047 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:46:50 crc kubenswrapper[4681]: E0404 03:46:50.203045 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:47:02 crc kubenswrapper[4681]: I0404 03:47:02.201946 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:47:02 crc kubenswrapper[4681]: E0404 03:47:02.203013 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:47:17 crc kubenswrapper[4681]: I0404 03:47:17.201151 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:47:17 crc kubenswrapper[4681]: E0404 03:47:17.202430 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:47:28 crc kubenswrapper[4681]: I0404 03:47:28.201838 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:47:28 crc kubenswrapper[4681]: E0404 03:47:28.202720 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:47:43 crc kubenswrapper[4681]: I0404 03:47:43.201506 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:47:43 crc kubenswrapper[4681]: E0404 03:47:43.202386 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:47:58 crc kubenswrapper[4681]: I0404 03:47:58.201610 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:47:58 crc kubenswrapper[4681]: E0404 03:47:58.202849 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.158797 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587908-gfh2l"] Apr 04 03:48:00 crc kubenswrapper[4681]: E0404 03:48:00.160367 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c60aa57-c814-4a04-a0ba-383b8e59e477" containerName="oc" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.160396 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c60aa57-c814-4a04-a0ba-383b8e59e477" containerName="oc" 
Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.160927 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c60aa57-c814-4a04-a0ba-383b8e59e477" containerName="oc" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.163942 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.166738 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.166871 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.167170 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.175428 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587908-gfh2l"] Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.235028 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vtfj\" (UniqueName: \"kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj\") pod \"auto-csr-approver-29587908-gfh2l\" (UID: \"073524c9-3e47-4f2f-a3ef-df6456d79233\") " pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.337296 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vtfj\" (UniqueName: \"kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj\") pod \"auto-csr-approver-29587908-gfh2l\" (UID: \"073524c9-3e47-4f2f-a3ef-df6456d79233\") " pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 
03:48:00.362287 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vtfj\" (UniqueName: \"kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj\") pod \"auto-csr-approver-29587908-gfh2l\" (UID: \"073524c9-3e47-4f2f-a3ef-df6456d79233\") " pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.484013 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:00 crc kubenswrapper[4681]: I0404 03:48:00.959186 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587908-gfh2l"] Apr 04 03:48:01 crc kubenswrapper[4681]: I0404 03:48:01.372021 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" event={"ID":"073524c9-3e47-4f2f-a3ef-df6456d79233","Type":"ContainerStarted","Data":"9300c3be0eae2dce0f815c8513df3ba4b4bc2960c28da96fecdbf7a3216c9c79"} Apr 04 03:48:02 crc kubenswrapper[4681]: I0404 03:48:02.382546 4681 generic.go:334] "Generic (PLEG): container finished" podID="073524c9-3e47-4f2f-a3ef-df6456d79233" containerID="7284fe24a1257b3c6ac51ccc986e4afa1574270dad4337663ee0f05999511b4a" exitCode=0 Apr 04 03:48:02 crc kubenswrapper[4681]: I0404 03:48:02.382602 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" event={"ID":"073524c9-3e47-4f2f-a3ef-df6456d79233","Type":"ContainerDied","Data":"7284fe24a1257b3c6ac51ccc986e4afa1574270dad4337663ee0f05999511b4a"} Apr 04 03:48:03 crc kubenswrapper[4681]: I0404 03:48:03.838831 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:03 crc kubenswrapper[4681]: I0404 03:48:03.912415 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vtfj\" (UniqueName: \"kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj\") pod \"073524c9-3e47-4f2f-a3ef-df6456d79233\" (UID: \"073524c9-3e47-4f2f-a3ef-df6456d79233\") " Apr 04 03:48:03 crc kubenswrapper[4681]: I0404 03:48:03.922192 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj" (OuterVolumeSpecName: "kube-api-access-6vtfj") pod "073524c9-3e47-4f2f-a3ef-df6456d79233" (UID: "073524c9-3e47-4f2f-a3ef-df6456d79233"). InnerVolumeSpecName "kube-api-access-6vtfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.014720 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vtfj\" (UniqueName: \"kubernetes.io/projected/073524c9-3e47-4f2f-a3ef-df6456d79233-kube-api-access-6vtfj\") on node \"crc\" DevicePath \"\"" Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.405871 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" event={"ID":"073524c9-3e47-4f2f-a3ef-df6456d79233","Type":"ContainerDied","Data":"9300c3be0eae2dce0f815c8513df3ba4b4bc2960c28da96fecdbf7a3216c9c79"} Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.405915 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9300c3be0eae2dce0f815c8513df3ba4b4bc2960c28da96fecdbf7a3216c9c79" Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.405980 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587908-gfh2l" Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.919186 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587902-88w2x"] Apr 04 03:48:04 crc kubenswrapper[4681]: I0404 03:48:04.928309 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587902-88w2x"] Apr 04 03:48:05 crc kubenswrapper[4681]: I0404 03:48:05.214883 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ae55dc-dec6-4d8a-baf4-c5dd56cecb32" path="/var/lib/kubelet/pods/22ae55dc-dec6-4d8a-baf4-c5dd56cecb32/volumes" Apr 04 03:48:12 crc kubenswrapper[4681]: I0404 03:48:12.201772 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:48:12 crc kubenswrapper[4681]: E0404 03:48:12.203944 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:23 crc kubenswrapper[4681]: I0404 03:48:23.201947 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:48:23 crc kubenswrapper[4681]: E0404 03:48:23.202875 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:35 crc kubenswrapper[4681]: I0404 03:48:35.201667 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:48:35 crc kubenswrapper[4681]: E0404 03:48:35.202219 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:35 crc kubenswrapper[4681]: I0404 03:48:35.839579 4681 scope.go:117] "RemoveContainer" containerID="450b7b9ccc623454c5c59f6c5653923e3ca0753ef29f3f30618c6f21eefd8189" Apr 04 03:48:47 crc kubenswrapper[4681]: I0404 03:48:47.201667 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:48:47 crc kubenswrapper[4681]: E0404 03:48:47.202566 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.716408 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-shsrj/must-gather-cz9zd"] Apr 04 03:48:56 crc kubenswrapper[4681]: E0404 03:48:56.717388 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="073524c9-3e47-4f2f-a3ef-df6456d79233" containerName="oc" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.717402 4681 
state_mem.go:107] "Deleted CPUSet assignment" podUID="073524c9-3e47-4f2f-a3ef-df6456d79233" containerName="oc" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.717582 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="073524c9-3e47-4f2f-a3ef-df6456d79233" containerName="oc" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.718681 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.720694 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-shsrj"/"kube-root-ca.crt" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.725631 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-shsrj"/"openshift-service-ca.crt" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.734339 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-shsrj/must-gather-cz9zd"] Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.789390 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4rh4\" (UniqueName: \"kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.789497 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.898446 4681 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.898677 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4rh4\" (UniqueName: \"kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.898830 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:56 crc kubenswrapper[4681]: I0404 03:48:56.931250 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4rh4\" (UniqueName: \"kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4\") pod \"must-gather-cz9zd\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:57 crc kubenswrapper[4681]: I0404 03:48:57.047536 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:48:57 crc kubenswrapper[4681]: I0404 03:48:57.561845 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-shsrj/must-gather-cz9zd"] Apr 04 03:48:58 crc kubenswrapper[4681]: I0404 03:48:58.075414 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/must-gather-cz9zd" event={"ID":"ed8fde71-911f-4017-8b4d-05022b816eb3","Type":"ContainerStarted","Data":"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f"} Apr 04 03:48:58 crc kubenswrapper[4681]: I0404 03:48:58.075453 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/must-gather-cz9zd" event={"ID":"ed8fde71-911f-4017-8b4d-05022b816eb3","Type":"ContainerStarted","Data":"58bb4175e32e6d600c17d5911ec206f71b27b953f71db4685e724aa8719937ae"} Apr 04 03:48:58 crc kubenswrapper[4681]: I0404 03:48:58.200767 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:48:58 crc kubenswrapper[4681]: E0404 03:48:58.201470 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:48:59 crc kubenswrapper[4681]: I0404 03:48:59.089232 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/must-gather-cz9zd" event={"ID":"ed8fde71-911f-4017-8b4d-05022b816eb3","Type":"ContainerStarted","Data":"0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069"} Apr 04 03:48:59 crc kubenswrapper[4681]: I0404 03:48:59.116226 4681 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-must-gather-shsrj/must-gather-cz9zd" podStartSLOduration=3.116194997 podStartE2EDuration="3.116194997s" podCreationTimestamp="2026-04-04 03:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 03:48:59.103982413 +0000 UTC m=+6818.769757573" watchObservedRunningTime="2026-04-04 03:48:59.116194997 +0000 UTC m=+6818.781970157" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.754028 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-shsrj/crc-debug-56ht6"] Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.755929 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.757863 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-shsrj"/"default-dockercfg-knfdp" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.799350 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvgv6\" (UniqueName: \"kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.799642 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.902251 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvgv6\" (UniqueName: 
\"kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.902356 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.902590 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:01 crc kubenswrapper[4681]: I0404 03:49:01.930459 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvgv6\" (UniqueName: \"kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6\") pod \"crc-debug-56ht6\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:02 crc kubenswrapper[4681]: I0404 03:49:02.078920 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:02 crc kubenswrapper[4681]: W0404 03:49:02.109713 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e28f045_1622_4517_8692_a1a17d82a0f8.slice/crio-395979ae9f2900a79c6a8e0b403ddb3c664313b1499afec0f211684cecf0db44 WatchSource:0}: Error finding container 395979ae9f2900a79c6a8e0b403ddb3c664313b1499afec0f211684cecf0db44: Status 404 returned error can't find the container with id 395979ae9f2900a79c6a8e0b403ddb3c664313b1499afec0f211684cecf0db44 Apr 04 03:49:02 crc kubenswrapper[4681]: I0404 03:49:02.141083 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-56ht6" event={"ID":"0e28f045-1622-4517-8692-a1a17d82a0f8","Type":"ContainerStarted","Data":"395979ae9f2900a79c6a8e0b403ddb3c664313b1499afec0f211684cecf0db44"} Apr 04 03:49:03 crc kubenswrapper[4681]: I0404 03:49:03.162184 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-56ht6" event={"ID":"0e28f045-1622-4517-8692-a1a17d82a0f8","Type":"ContainerStarted","Data":"974b6d735a7b5811f9b68eda03a171d82b2f51eece1f0da10abfe87e96cacd27"} Apr 04 03:49:03 crc kubenswrapper[4681]: I0404 03:49:03.194763 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-shsrj/crc-debug-56ht6" podStartSLOduration=2.194727352 podStartE2EDuration="2.194727352s" podCreationTimestamp="2026-04-04 03:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 03:49:03.177704867 +0000 UTC m=+6822.843479997" watchObservedRunningTime="2026-04-04 03:49:03.194727352 +0000 UTC m=+6822.860502472" Apr 04 03:49:09 crc kubenswrapper[4681]: I0404 03:49:09.201500 4681 scope.go:117] "RemoveContainer" 
containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:49:09 crc kubenswrapper[4681]: E0404 03:49:09.202394 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:49:23 crc kubenswrapper[4681]: I0404 03:49:23.200943 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:49:23 crc kubenswrapper[4681]: E0404 03:49:23.201587 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:49:37 crc kubenswrapper[4681]: I0404 03:49:37.200971 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:49:37 crc kubenswrapper[4681]: E0404 03:49:37.201699 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:49:41 crc kubenswrapper[4681]: I0404 03:49:41.514636 4681 generic.go:334] 
"Generic (PLEG): container finished" podID="0e28f045-1622-4517-8692-a1a17d82a0f8" containerID="974b6d735a7b5811f9b68eda03a171d82b2f51eece1f0da10abfe87e96cacd27" exitCode=0 Apr 04 03:49:41 crc kubenswrapper[4681]: I0404 03:49:41.514690 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-56ht6" event={"ID":"0e28f045-1622-4517-8692-a1a17d82a0f8","Type":"ContainerDied","Data":"974b6d735a7b5811f9b68eda03a171d82b2f51eece1f0da10abfe87e96cacd27"} Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.641958 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.677778 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-56ht6"] Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.689123 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-56ht6"] Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.784045 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvgv6\" (UniqueName: \"kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6\") pod \"0e28f045-1622-4517-8692-a1a17d82a0f8\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.784529 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host\") pod \"0e28f045-1622-4517-8692-a1a17d82a0f8\" (UID: \"0e28f045-1622-4517-8692-a1a17d82a0f8\") " Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.784749 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host" (OuterVolumeSpecName: "host") pod 
"0e28f045-1622-4517-8692-a1a17d82a0f8" (UID: "0e28f045-1622-4517-8692-a1a17d82a0f8"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.785170 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0e28f045-1622-4517-8692-a1a17d82a0f8-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.803067 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6" (OuterVolumeSpecName: "kube-api-access-wvgv6") pod "0e28f045-1622-4517-8692-a1a17d82a0f8" (UID: "0e28f045-1622-4517-8692-a1a17d82a0f8"). InnerVolumeSpecName "kube-api-access-wvgv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:49:42 crc kubenswrapper[4681]: I0404 03:49:42.887043 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvgv6\" (UniqueName: \"kubernetes.io/projected/0e28f045-1622-4517-8692-a1a17d82a0f8-kube-api-access-wvgv6\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.212205 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e28f045-1622-4517-8692-a1a17d82a0f8" path="/var/lib/kubelet/pods/0e28f045-1622-4517-8692-a1a17d82a0f8/volumes" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.537203 4681 scope.go:117] "RemoveContainer" containerID="974b6d735a7b5811f9b68eda03a171d82b2f51eece1f0da10abfe87e96cacd27" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.537408 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-56ht6" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.925903 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-shsrj/crc-debug-s5vc5"] Apr 04 03:49:43 crc kubenswrapper[4681]: E0404 03:49:43.926561 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e28f045-1622-4517-8692-a1a17d82a0f8" containerName="container-00" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.930161 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e28f045-1622-4517-8692-a1a17d82a0f8" containerName="container-00" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.930745 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e28f045-1622-4517-8692-a1a17d82a0f8" containerName="container-00" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.931536 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:43 crc kubenswrapper[4681]: I0404 03:49:43.933954 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-shsrj"/"default-dockercfg-knfdp" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.009093 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.009473 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t42s8\" (UniqueName: \"kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " 
pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.111855 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.111983 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.112075 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t42s8\" (UniqueName: \"kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.137111 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t42s8\" (UniqueName: \"kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8\") pod \"crc-debug-s5vc5\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.248732 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.549808 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" event={"ID":"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306","Type":"ContainerStarted","Data":"952f7547f679ecce0df449e96acc50ad394d07a91a98d67230c06d7b5fe01ec5"} Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.550103 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" event={"ID":"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306","Type":"ContainerStarted","Data":"1244ff48dc1eab9adbca513d27455ac108698d426b56caff08f93ab1a56c630c"} Apr 04 03:49:44 crc kubenswrapper[4681]: I0404 03:49:44.571200 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" podStartSLOduration=1.5711825 podStartE2EDuration="1.5711825s" podCreationTimestamp="2026-04-04 03:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 03:49:44.568019093 +0000 UTC m=+6864.233794213" watchObservedRunningTime="2026-04-04 03:49:44.5711825 +0000 UTC m=+6864.236957620" Apr 04 03:49:45 crc kubenswrapper[4681]: I0404 03:49:45.562728 4681 generic.go:334] "Generic (PLEG): container finished" podID="463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" containerID="952f7547f679ecce0df449e96acc50ad394d07a91a98d67230c06d7b5fe01ec5" exitCode=0 Apr 04 03:49:45 crc kubenswrapper[4681]: I0404 03:49:45.562763 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" event={"ID":"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306","Type":"ContainerDied","Data":"952f7547f679ecce0df449e96acc50ad394d07a91a98d67230c06d7b5fe01ec5"} Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.689495 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.762643 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host\") pod \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.762689 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t42s8\" (UniqueName: \"kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8\") pod \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\" (UID: \"463d21bd-4dc5-49d4-8ae2-e1efd2e5e306\") " Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.762724 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host" (OuterVolumeSpecName: "host") pod "463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" (UID: "463d21bd-4dc5-49d4-8ae2-e1efd2e5e306"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.763137 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.767823 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8" (OuterVolumeSpecName: "kube-api-access-t42s8") pod "463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" (UID: "463d21bd-4dc5-49d4-8ae2-e1efd2e5e306"). InnerVolumeSpecName "kube-api-access-t42s8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.839841 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:49:46 crc kubenswrapper[4681]: E0404 03:49:46.840433 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" containerName="container-00" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.840461 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" containerName="container-00" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.840731 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" containerName="container-00" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.851742 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.865013 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t42s8\" (UniqueName: \"kubernetes.io/projected/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306-kube-api-access-t42s8\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.877559 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.967324 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.967462 4681 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:46 crc kubenswrapper[4681]: I0404 03:49:46.967520 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8wg\" (UniqueName: \"kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.003999 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-s5vc5"] Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.015998 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-s5vc5"] Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.069735 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8wg\" (UniqueName: \"kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.069927 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.070031 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.070534 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.070553 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.090111 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8wg\" (UniqueName: \"kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg\") pod \"redhat-operators-2kdfl\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.179717 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.228563 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="463d21bd-4dc5-49d4-8ae2-e1efd2e5e306" path="/var/lib/kubelet/pods/463d21bd-4dc5-49d4-8ae2-e1efd2e5e306/volumes" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.591518 4681 scope.go:117] "RemoveContainer" containerID="952f7547f679ecce0df449e96acc50ad394d07a91a98d67230c06d7b5fe01ec5" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.591558 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-s5vc5" Apr 04 03:49:47 crc kubenswrapper[4681]: I0404 03:49:47.682966 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:49:47 crc kubenswrapper[4681]: W0404 03:49:47.683153 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24bb8be5_7c3c_4824_b8c9_7867dab6dfc9.slice/crio-ec58d682984882350694ef6950ce5b67a8cd52a681cbdeb4695b81ae3280eee5 WatchSource:0}: Error finding container ec58d682984882350694ef6950ce5b67a8cd52a681cbdeb4695b81ae3280eee5: Status 404 returned error can't find the container with id ec58d682984882350694ef6950ce5b67a8cd52a681cbdeb4695b81ae3280eee5 Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.242325 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-shsrj/crc-debug-c8p98"] Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.243918 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.246842 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-shsrj"/"default-dockercfg-knfdp" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.292746 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4sr8\" (UniqueName: \"kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.292994 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.395638 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4sr8\" (UniqueName: \"kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.395815 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.395959 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.426327 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4sr8\" (UniqueName: \"kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8\") pod \"crc-debug-c8p98\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.566424 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.605243 4681 generic.go:334] "Generic (PLEG): container finished" podID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerID="dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6" exitCode=0 Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.605312 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerDied","Data":"dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6"} Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.605365 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerStarted","Data":"ec58d682984882350694ef6950ce5b67a8cd52a681cbdeb4695b81ae3280eee5"} Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.606691 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.644030 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] 
Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.646353 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.671328 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.705573 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42pj4\" (UniqueName: \"kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.705743 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.705864 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.810059 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42pj4\" (UniqueName: \"kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " 
pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.810112 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.810184 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.810688 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.810800 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:48 crc kubenswrapper[4681]: I0404 03:49:48.835129 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42pj4\" (UniqueName: \"kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4\") pod \"certified-operators-2xqmx\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " 
pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.089323 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.205376 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:49:49 crc kubenswrapper[4681]: E0404 03:49:49.205671 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.235216 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.242738 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.312760 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.325261 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9h5dv\" (UniqueName: \"kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.325606 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.325725 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.436828 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9h5dv\" (UniqueName: \"kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.436998 4681 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.437196 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.437859 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.437877 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.473179 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9h5dv\" (UniqueName: \"kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv\") pod \"community-operators-gjxsw\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.619386 4681 generic.go:334] "Generic (PLEG): container 
finished" podID="6d84378a-8210-40de-a378-fad6e89fd0fe" containerID="c69a4957b9b270f5f958cf0f06b0f9cdf4c96de8001f8f791d732b8ab83355db" exitCode=0 Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.619427 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-c8p98" event={"ID":"6d84378a-8210-40de-a378-fad6e89fd0fe","Type":"ContainerDied","Data":"c69a4957b9b270f5f958cf0f06b0f9cdf4c96de8001f8f791d732b8ab83355db"} Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.619452 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/crc-debug-c8p98" event={"ID":"6d84378a-8210-40de-a378-fad6e89fd0fe","Type":"ContainerStarted","Data":"0cf75eeb34b63e0a2bc6bb6bbde3d8cfc136cefea5f85841b684ecdc29eecfa8"} Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.624195 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.681736 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.801642 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-c8p98"] Apr 04 03:49:49 crc kubenswrapper[4681]: I0404 03:49:49.813654 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-shsrj/crc-debug-c8p98"] Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.282923 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.632151 4681 generic.go:334] "Generic (PLEG): container finished" podID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerID="b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809" exitCode=0 Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.632231 4681 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerDied","Data":"b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809"} Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.632526 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerStarted","Data":"af795ac533d78ee6a68d34466de6456a3d3951fe6414d6b18a873b35453551e0"} Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.635311 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerStarted","Data":"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8"} Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.640750 4681 generic.go:334] "Generic (PLEG): container finished" podID="6389da48-b7bc-4590-896b-891bc1cca82a" containerID="1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c" exitCode=0 Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.640953 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerDied","Data":"1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c"} Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.641059 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerStarted","Data":"7fe9233525e008f021334fda1256d4d02fee4ab3e352e316ad66bec3f163489e"} Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.737191 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.876122 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4sr8\" (UniqueName: \"kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8\") pod \"6d84378a-8210-40de-a378-fad6e89fd0fe\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.876336 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host\") pod \"6d84378a-8210-40de-a378-fad6e89fd0fe\" (UID: \"6d84378a-8210-40de-a378-fad6e89fd0fe\") " Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.876515 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host" (OuterVolumeSpecName: "host") pod "6d84378a-8210-40de-a378-fad6e89fd0fe" (UID: "6d84378a-8210-40de-a378-fad6e89fd0fe"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.877238 4681 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6d84378a-8210-40de-a378-fad6e89fd0fe-host\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.889419 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8" (OuterVolumeSpecName: "kube-api-access-h4sr8") pod "6d84378a-8210-40de-a378-fad6e89fd0fe" (UID: "6d84378a-8210-40de-a378-fad6e89fd0fe"). InnerVolumeSpecName "kube-api-access-h4sr8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:49:50 crc kubenswrapper[4681]: I0404 03:49:50.979244 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4sr8\" (UniqueName: \"kubernetes.io/projected/6d84378a-8210-40de-a378-fad6e89fd0fe-kube-api-access-h4sr8\") on node \"crc\" DevicePath \"\"" Apr 04 03:49:51 crc kubenswrapper[4681]: I0404 03:49:51.214462 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d84378a-8210-40de-a378-fad6e89fd0fe" path="/var/lib/kubelet/pods/6d84378a-8210-40de-a378-fad6e89fd0fe/volumes" Apr 04 03:49:51 crc kubenswrapper[4681]: I0404 03:49:51.656470 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerStarted","Data":"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f"} Apr 04 03:49:51 crc kubenswrapper[4681]: I0404 03:49:51.662042 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/crc-debug-c8p98" Apr 04 03:49:51 crc kubenswrapper[4681]: I0404 03:49:51.662722 4681 scope.go:117] "RemoveContainer" containerID="c69a4957b9b270f5f958cf0f06b0f9cdf4c96de8001f8f791d732b8ab83355db" Apr 04 03:49:52 crc kubenswrapper[4681]: I0404 03:49:52.673510 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerStarted","Data":"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e"} Apr 04 03:49:54 crc kubenswrapper[4681]: I0404 03:49:54.694853 4681 generic.go:334] "Generic (PLEG): container finished" podID="6389da48-b7bc-4590-896b-891bc1cca82a" containerID="7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f" exitCode=0 Apr 04 03:49:54 crc kubenswrapper[4681]: I0404 03:49:54.694925 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerDied","Data":"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f"} Apr 04 03:49:56 crc kubenswrapper[4681]: I0404 03:49:56.722554 4681 generic.go:334] "Generic (PLEG): container finished" podID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerID="a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e" exitCode=0 Apr 04 03:49:56 crc kubenswrapper[4681]: I0404 03:49:56.722621 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerDied","Data":"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e"} Apr 04 03:49:56 crc kubenswrapper[4681]: I0404 03:49:56.727286 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" 
event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerStarted","Data":"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1"} Apr 04 03:49:56 crc kubenswrapper[4681]: I0404 03:49:56.760911 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjxsw" podStartSLOduration=2.579863608 podStartE2EDuration="7.760894922s" podCreationTimestamp="2026-04-04 03:49:49 +0000 UTC" firstStartedPulling="2026-04-04 03:49:50.644759625 +0000 UTC m=+6870.310534765" lastFinishedPulling="2026-04-04 03:49:55.825790959 +0000 UTC m=+6875.491566079" observedRunningTime="2026-04-04 03:49:56.756999485 +0000 UTC m=+6876.422774605" watchObservedRunningTime="2026-04-04 03:49:56.760894922 +0000 UTC m=+6876.426670042" Apr 04 03:49:58 crc kubenswrapper[4681]: I0404 03:49:58.750706 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerStarted","Data":"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666"} Apr 04 03:49:58 crc kubenswrapper[4681]: I0404 03:49:58.763739 4681 generic.go:334] "Generic (PLEG): container finished" podID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerID="1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8" exitCode=0 Apr 04 03:49:58 crc kubenswrapper[4681]: I0404 03:49:58.763816 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerDied","Data":"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8"} Apr 04 03:49:58 crc kubenswrapper[4681]: I0404 03:49:58.778648 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2xqmx" podStartSLOduration=4.08757564 podStartE2EDuration="10.778625261s" podCreationTimestamp="2026-04-04 03:49:48 +0000 UTC" 
firstStartedPulling="2026-04-04 03:49:50.634189307 +0000 UTC m=+6870.299964427" lastFinishedPulling="2026-04-04 03:49:57.325238928 +0000 UTC m=+6876.991014048" observedRunningTime="2026-04-04 03:49:58.77268993 +0000 UTC m=+6878.438465050" watchObservedRunningTime="2026-04-04 03:49:58.778625261 +0000 UTC m=+6878.444400381" Apr 04 03:49:59 crc kubenswrapper[4681]: I0404 03:49:59.091465 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:59 crc kubenswrapper[4681]: I0404 03:49:59.091548 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:49:59 crc kubenswrapper[4681]: I0404 03:49:59.625064 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:49:59 crc kubenswrapper[4681]: I0404 03:49:59.625416 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.154837 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587910-jwxww"] Apr 04 03:50:00 crc kubenswrapper[4681]: E0404 03:50:00.155504 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d84378a-8210-40de-a378-fad6e89fd0fe" containerName="container-00" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.155518 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d84378a-8210-40de-a378-fad6e89fd0fe" containerName="container-00" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.155740 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d84378a-8210-40de-a378-fad6e89fd0fe" containerName="container-00" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.156424 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.159656 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.160028 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.160202 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.169650 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587910-jwxww"] Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.196650 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-2xqmx" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="registry-server" probeResult="failure" output=< Apr 04 03:50:00 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:50:00 crc kubenswrapper[4681]: > Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.301894 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5trh\" (UniqueName: \"kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh\") pod \"auto-csr-approver-29587910-jwxww\" (UID: \"d0c05a9b-d399-4e3d-b738-aa14b9f23067\") " pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.403868 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5trh\" (UniqueName: \"kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh\") pod \"auto-csr-approver-29587910-jwxww\" (UID: \"d0c05a9b-d399-4e3d-b738-aa14b9f23067\") 
" pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.427115 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5trh\" (UniqueName: \"kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh\") pod \"auto-csr-approver-29587910-jwxww\" (UID: \"d0c05a9b-d399-4e3d-b738-aa14b9f23067\") " pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.475171 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.683773 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjxsw" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" probeResult="failure" output=< Apr 04 03:50:00 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:50:00 crc kubenswrapper[4681]: > Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.787891 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerStarted","Data":"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee"} Apr 04 03:50:00 crc kubenswrapper[4681]: I0404 03:50:00.839986 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kdfl" podStartSLOduration=4.119779432 podStartE2EDuration="14.839959671s" podCreationTimestamp="2026-04-04 03:49:46 +0000 UTC" firstStartedPulling="2026-04-04 03:49:48.606495245 +0000 UTC m=+6868.272270365" lastFinishedPulling="2026-04-04 03:49:59.326675474 +0000 UTC m=+6878.992450604" observedRunningTime="2026-04-04 03:50:00.821495577 +0000 UTC m=+6880.487270687" watchObservedRunningTime="2026-04-04 
03:50:00.839959671 +0000 UTC m=+6880.505734791" Apr 04 03:50:01 crc kubenswrapper[4681]: W0404 03:50:01.058162 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd0c05a9b_d399_4e3d_b738_aa14b9f23067.slice/crio-9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0 WatchSource:0}: Error finding container 9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0: Status 404 returned error can't find the container with id 9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0 Apr 04 03:50:01 crc kubenswrapper[4681]: I0404 03:50:01.072595 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587910-jwxww"] Apr 04 03:50:01 crc kubenswrapper[4681]: I0404 03:50:01.797127 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587910-jwxww" event={"ID":"d0c05a9b-d399-4e3d-b738-aa14b9f23067","Type":"ContainerStarted","Data":"9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0"} Apr 04 03:50:02 crc kubenswrapper[4681]: I0404 03:50:02.201777 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:50:02 crc kubenswrapper[4681]: E0404 03:50:02.202224 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:50:02 crc kubenswrapper[4681]: I0404 03:50:02.808922 4681 generic.go:334] "Generic (PLEG): container finished" podID="d0c05a9b-d399-4e3d-b738-aa14b9f23067" 
containerID="b50b6a74e8263be1077722438d5cded0a552aab000461ffe00960f6f02ff2254" exitCode=0 Apr 04 03:50:02 crc kubenswrapper[4681]: I0404 03:50:02.808962 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587910-jwxww" event={"ID":"d0c05a9b-d399-4e3d-b738-aa14b9f23067","Type":"ContainerDied","Data":"b50b6a74e8263be1077722438d5cded0a552aab000461ffe00960f6f02ff2254"} Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.250859 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.389920 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5trh\" (UniqueName: \"kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh\") pod \"d0c05a9b-d399-4e3d-b738-aa14b9f23067\" (UID: \"d0c05a9b-d399-4e3d-b738-aa14b9f23067\") " Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.418576 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh" (OuterVolumeSpecName: "kube-api-access-d5trh") pod "d0c05a9b-d399-4e3d-b738-aa14b9f23067" (UID: "d0c05a9b-d399-4e3d-b738-aa14b9f23067"). InnerVolumeSpecName "kube-api-access-d5trh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.492880 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5trh\" (UniqueName: \"kubernetes.io/projected/d0c05a9b-d399-4e3d-b738-aa14b9f23067-kube-api-access-d5trh\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.839030 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587910-jwxww" event={"ID":"d0c05a9b-d399-4e3d-b738-aa14b9f23067","Type":"ContainerDied","Data":"9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0"} Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.839069 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9375757569ee362478615ab484aa64a3cc7e69207b2544103f7d51ff19c49ff0" Apr 04 03:50:04 crc kubenswrapper[4681]: I0404 03:50:04.839120 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587910-jwxww" Apr 04 03:50:05 crc kubenswrapper[4681]: I0404 03:50:05.322567 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587904-8g84d"] Apr 04 03:50:05 crc kubenswrapper[4681]: I0404 03:50:05.331527 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587904-8g84d"] Apr 04 03:50:07 crc kubenswrapper[4681]: I0404 03:50:07.180829 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:07 crc kubenswrapper[4681]: I0404 03:50:07.181245 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:07 crc kubenswrapper[4681]: I0404 03:50:07.223891 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="508cebce-fa34-4131-a195-70dc04ddc013" 
path="/var/lib/kubelet/pods/508cebce-fa34-4131-a195-70dc04ddc013/volumes" Apr 04 03:50:08 crc kubenswrapper[4681]: I0404 03:50:08.234179 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdfl" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" probeResult="failure" output=< Apr 04 03:50:08 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:50:08 crc kubenswrapper[4681]: > Apr 04 03:50:09 crc kubenswrapper[4681]: I0404 03:50:09.143168 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:50:09 crc kubenswrapper[4681]: I0404 03:50:09.211699 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:50:09 crc kubenswrapper[4681]: I0404 03:50:09.380637 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] Apr 04 03:50:10 crc kubenswrapper[4681]: I0404 03:50:10.679378 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-gjxsw" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" probeResult="failure" output=< Apr 04 03:50:10 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:50:10 crc kubenswrapper[4681]: > Apr 04 03:50:10 crc kubenswrapper[4681]: I0404 03:50:10.892634 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2xqmx" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="registry-server" containerID="cri-o://6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666" gracePeriod=2 Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.427957 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.610168 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities\") pod \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.610430 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content\") pod \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.610473 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42pj4\" (UniqueName: \"kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4\") pod \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\" (UID: \"3f2c8491-ce1c-472d-90b7-9e0da48d9f19\") " Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.612205 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities" (OuterVolumeSpecName: "utilities") pod "3f2c8491-ce1c-472d-90b7-9e0da48d9f19" (UID: "3f2c8491-ce1c-472d-90b7-9e0da48d9f19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.616895 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4" (OuterVolumeSpecName: "kube-api-access-42pj4") pod "3f2c8491-ce1c-472d-90b7-9e0da48d9f19" (UID: "3f2c8491-ce1c-472d-90b7-9e0da48d9f19"). InnerVolumeSpecName "kube-api-access-42pj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.669661 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f2c8491-ce1c-472d-90b7-9e0da48d9f19" (UID: "3f2c8491-ce1c-472d-90b7-9e0da48d9f19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.712893 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.712929 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.712940 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42pj4\" (UniqueName: \"kubernetes.io/projected/3f2c8491-ce1c-472d-90b7-9e0da48d9f19-kube-api-access-42pj4\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.903764 4681 generic.go:334] "Generic (PLEG): container finished" podID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerID="6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666" exitCode=0 Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.903975 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerDied","Data":"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666"} Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.904084 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-2xqmx" event={"ID":"3f2c8491-ce1c-472d-90b7-9e0da48d9f19","Type":"ContainerDied","Data":"af795ac533d78ee6a68d34466de6456a3d3951fe6414d6b18a873b35453551e0"} Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.904168 4681 scope.go:117] "RemoveContainer" containerID="6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.904438 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2xqmx" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.926098 4681 scope.go:117] "RemoveContainer" containerID="a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e" Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.948648 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.960845 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2xqmx"] Apr 04 03:50:11 crc kubenswrapper[4681]: I0404 03:50:11.967551 4681 scope.go:117] "RemoveContainer" containerID="b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 03:50:12.006859 4681 scope.go:117] "RemoveContainer" containerID="6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666" Apr 04 03:50:12 crc kubenswrapper[4681]: E0404 03:50:12.007404 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666\": container with ID starting with 6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666 not found: ID does not exist" containerID="6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 
03:50:12.007468 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666"} err="failed to get container status \"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666\": rpc error: code = NotFound desc = could not find container \"6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666\": container with ID starting with 6af68f954770c076d9f0e41ec0cb4e8e7f8a8d9b420531d51bde085312898666 not found: ID does not exist" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 03:50:12.007502 4681 scope.go:117] "RemoveContainer" containerID="a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e" Apr 04 03:50:12 crc kubenswrapper[4681]: E0404 03:50:12.007991 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e\": container with ID starting with a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e not found: ID does not exist" containerID="a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 03:50:12.008024 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e"} err="failed to get container status \"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e\": rpc error: code = NotFound desc = could not find container \"a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e\": container with ID starting with a7936327ce9b1e930cc33f33434c6b1869f0d785c2eaad9f3e0f195d2db61a5e not found: ID does not exist" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 03:50:12.008044 4681 scope.go:117] "RemoveContainer" containerID="b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809" Apr 04 03:50:12 crc 
kubenswrapper[4681]: E0404 03:50:12.008372 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809\": container with ID starting with b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809 not found: ID does not exist" containerID="b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809" Apr 04 03:50:12 crc kubenswrapper[4681]: I0404 03:50:12.008414 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809"} err="failed to get container status \"b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809\": rpc error: code = NotFound desc = could not find container \"b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809\": container with ID starting with b9defc4ef6a01e7b8f2376f3b2842c7a58f373afb1fcbd99033274863bcc0809 not found: ID does not exist" Apr 04 03:50:13 crc kubenswrapper[4681]: I0404 03:50:13.221247 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" path="/var/lib/kubelet/pods/3f2c8491-ce1c-472d-90b7-9e0da48d9f19/volumes" Apr 04 03:50:14 crc kubenswrapper[4681]: I0404 03:50:14.201108 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:50:14 crc kubenswrapper[4681]: E0404 03:50:14.201641 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:50:18 crc 
kubenswrapper[4681]: I0404 03:50:18.248943 4681 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kdfl" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" probeResult="failure" output=< Apr 04 03:50:18 crc kubenswrapper[4681]: timeout: failed to connect service ":50051" within 1s Apr 04 03:50:18 crc kubenswrapper[4681]: > Apr 04 03:50:19 crc kubenswrapper[4681]: I0404 03:50:19.673567 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:50:19 crc kubenswrapper[4681]: I0404 03:50:19.738019 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:50:20 crc kubenswrapper[4681]: I0404 03:50:20.844077 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:50:20 crc kubenswrapper[4681]: I0404 03:50:20.991830 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjxsw" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" containerID="cri-o://f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1" gracePeriod=2 Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.605223 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.794018 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities\") pod \"6389da48-b7bc-4590-896b-891bc1cca82a\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.794092 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9h5dv\" (UniqueName: \"kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv\") pod \"6389da48-b7bc-4590-896b-891bc1cca82a\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.794163 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content\") pod \"6389da48-b7bc-4590-896b-891bc1cca82a\" (UID: \"6389da48-b7bc-4590-896b-891bc1cca82a\") " Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.800877 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities" (OuterVolumeSpecName: "utilities") pod "6389da48-b7bc-4590-896b-891bc1cca82a" (UID: "6389da48-b7bc-4590-896b-891bc1cca82a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.807519 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv" (OuterVolumeSpecName: "kube-api-access-9h5dv") pod "6389da48-b7bc-4590-896b-891bc1cca82a" (UID: "6389da48-b7bc-4590-896b-891bc1cca82a"). InnerVolumeSpecName "kube-api-access-9h5dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.845384 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6389da48-b7bc-4590-896b-891bc1cca82a" (UID: "6389da48-b7bc-4590-896b-891bc1cca82a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.897218 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.897248 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6389da48-b7bc-4590-896b-891bc1cca82a-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:21 crc kubenswrapper[4681]: I0404 03:50:21.897258 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9h5dv\" (UniqueName: \"kubernetes.io/projected/6389da48-b7bc-4590-896b-891bc1cca82a-kube-api-access-9h5dv\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.006641 4681 generic.go:334] "Generic (PLEG): container finished" podID="6389da48-b7bc-4590-896b-891bc1cca82a" containerID="f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1" exitCode=0 Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.006705 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerDied","Data":"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1"} Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.006735 4681 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-gjxsw" event={"ID":"6389da48-b7bc-4590-896b-891bc1cca82a","Type":"ContainerDied","Data":"7fe9233525e008f021334fda1256d4d02fee4ab3e352e316ad66bec3f163489e"} Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.006762 4681 scope.go:117] "RemoveContainer" containerID="f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.006932 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gjxsw" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.036855 4681 scope.go:117] "RemoveContainer" containerID="7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.049545 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.054759 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjxsw"] Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.070656 4681 scope.go:117] "RemoveContainer" containerID="1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.104976 4681 scope.go:117] "RemoveContainer" containerID="f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1" Apr 04 03:50:22 crc kubenswrapper[4681]: E0404 03:50:22.105537 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1\": container with ID starting with f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1 not found: ID does not exist" containerID="f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 
03:50:22.105577 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1"} err="failed to get container status \"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1\": rpc error: code = NotFound desc = could not find container \"f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1\": container with ID starting with f722f3ef07b8c8c3951e58d1eabd041999ba7e8f7ff76bb51ac44d3cd428fab1 not found: ID does not exist" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.105606 4681 scope.go:117] "RemoveContainer" containerID="7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f" Apr 04 03:50:22 crc kubenswrapper[4681]: E0404 03:50:22.105989 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f\": container with ID starting with 7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f not found: ID does not exist" containerID="7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.106024 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f"} err="failed to get container status \"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f\": rpc error: code = NotFound desc = could not find container \"7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f\": container with ID starting with 7d653d6d42ee169714d4b3cff6e096bd079081688cfeba7c8237431a48890b7f not found: ID does not exist" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.106050 4681 scope.go:117] "RemoveContainer" containerID="1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c" Apr 04 03:50:22 crc 
kubenswrapper[4681]: E0404 03:50:22.106478 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c\": container with ID starting with 1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c not found: ID does not exist" containerID="1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c" Apr 04 03:50:22 crc kubenswrapper[4681]: I0404 03:50:22.106498 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c"} err="failed to get container status \"1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c\": rpc error: code = NotFound desc = could not find container \"1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c\": container with ID starting with 1452ec6533fc602f9b12ea9d2a9297f27471e32826d34cccc42d76bed9efbc6c not found: ID does not exist" Apr 04 03:50:23 crc kubenswrapper[4681]: I0404 03:50:23.222944 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" path="/var/lib/kubelet/pods/6389da48-b7bc-4590-896b-891bc1cca82a/volumes" Apr 04 03:50:27 crc kubenswrapper[4681]: I0404 03:50:27.236463 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:27 crc kubenswrapper[4681]: I0404 03:50:27.287961 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:27 crc kubenswrapper[4681]: I0404 03:50:27.480959 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.081847 4681 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-2kdfl" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" containerID="cri-o://939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee" gracePeriod=2 Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.201517 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:50:29 crc kubenswrapper[4681]: E0404 03:50:29.201824 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.545617 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.686368 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content\") pod \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.686762 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8wg\" (UniqueName: \"kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg\") pod \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.686852 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities\") pod \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\" (UID: \"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9\") " Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.690766 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities" (OuterVolumeSpecName: "utilities") pod "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" (UID: "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.694446 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg" (OuterVolumeSpecName: "kube-api-access-tb8wg") pod "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" (UID: "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9"). InnerVolumeSpecName "kube-api-access-tb8wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.789222 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8wg\" (UniqueName: \"kubernetes.io/projected/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-kube-api-access-tb8wg\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.789254 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.839353 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" (UID: "24bb8be5-7c3c-4824-b8c9-7867dab6dfc9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:50:29 crc kubenswrapper[4681]: I0404 03:50:29.890852 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.095910 4681 generic.go:334] "Generic (PLEG): container finished" podID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerID="939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee" exitCode=0 Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.095965 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerDied","Data":"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee"} Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.096039 4681 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-2kdfl" event={"ID":"24bb8be5-7c3c-4824-b8c9-7867dab6dfc9","Type":"ContainerDied","Data":"ec58d682984882350694ef6950ce5b67a8cd52a681cbdeb4695b81ae3280eee5"} Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.096070 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kdfl" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.096079 4681 scope.go:117] "RemoveContainer" containerID="939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.133116 4681 scope.go:117] "RemoveContainer" containerID="1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.160966 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.176650 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kdfl"] Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.177449 4681 scope.go:117] "RemoveContainer" containerID="dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.248406 4681 scope.go:117] "RemoveContainer" containerID="939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee" Apr 04 03:50:30 crc kubenswrapper[4681]: E0404 03:50:30.250102 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee\": container with ID starting with 939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee not found: ID does not exist" containerID="939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.250139 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee"} err="failed to get container status \"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee\": rpc error: code = NotFound desc = could not find container \"939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee\": container with ID starting with 939857dd69429df12ca94771dcdfba9be51f493cc07b3f04de0455380a8452ee not found: ID does not exist" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.250193 4681 scope.go:117] "RemoveContainer" containerID="1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8" Apr 04 03:50:30 crc kubenswrapper[4681]: E0404 03:50:30.250591 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8\": container with ID starting with 1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8 not found: ID does not exist" containerID="1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.250624 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8"} err="failed to get container status \"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8\": rpc error: code = NotFound desc = could not find container \"1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8\": container with ID starting with 1f4c4f4b41be471e2be460390517c701d56f22e61bb22d2a7924bdd59ce221a8 not found: ID does not exist" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.250643 4681 scope.go:117] "RemoveContainer" containerID="dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6" Apr 04 03:50:30 crc kubenswrapper[4681]: E0404 
03:50:30.251046 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6\": container with ID starting with dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6 not found: ID does not exist" containerID="dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6" Apr 04 03:50:30 crc kubenswrapper[4681]: I0404 03:50:30.251073 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6"} err="failed to get container status \"dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6\": rpc error: code = NotFound desc = could not find container \"dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6\": container with ID starting with dc5d0ae3518793caf245004e82bae8c0e04253c9af4fbd8ead027231eb496de6 not found: ID does not exist" Apr 04 03:50:31 crc kubenswrapper[4681]: I0404 03:50:31.215281 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" path="/var/lib/kubelet/pods/24bb8be5-7c3c-4824-b8c9-7867dab6dfc9/volumes" Apr 04 03:50:36 crc kubenswrapper[4681]: I0404 03:50:36.011664 4681 scope.go:117] "RemoveContainer" containerID="279ad1e9e16f83ac7fd7211027fac9579d8d2586fe1d318e6a07d53d5eac05e7" Apr 04 03:50:39 crc kubenswrapper[4681]: I0404 03:50:39.834589 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb6cbf97d-96269_bce5c08c-6cdc-47ae-9454-ffc500f6e34c/barbican-api/0.log" Apr 04 03:50:39 crc kubenswrapper[4681]: I0404 03:50:39.960451 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-5cb6cbf97d-96269_bce5c08c-6cdc-47ae-9454-ffc500f6e34c/barbican-api-log/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.073327 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-796868c666-kk4mh_e9699275-8e01-4222-9e46-b90aa70f2a3c/barbican-keystone-listener/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.096987 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-796868c666-kk4mh_e9699275-8e01-4222-9e46-b90aa70f2a3c/barbican-keystone-listener-log/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.280968 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5699bfbbf-jpbrf_24831041-c157-474d-9e6d-55931683ed21/barbican-worker/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.325777 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5699bfbbf-jpbrf_24831041-c157-474d-9e6d-55931683ed21/barbican-worker-log/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.709343 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/ceilometer-central-agent/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.713749 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/ceilometer-notification-agent/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.772761 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-c7z64_00befa4c-4be8-4cc4-8e8e-46c0bb3b6592/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.803385 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/proxy-httpd/0.log" Apr 04 03:50:40 crc kubenswrapper[4681]: I0404 03:50:40.971441 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_130000be-4800-4c22-9a54-08918788abad/sg-core/0.log" Apr 04 
03:50:41 crc kubenswrapper[4681]: I0404 03:50:41.158697 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1f293d4-d146-49d4-a75d-8e972a25b758/cinder-api-log/0.log" Apr 04 03:50:41 crc kubenswrapper[4681]: I0404 03:50:41.201508 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:50:41 crc kubenswrapper[4681]: E0404 03:50:41.201918 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:50:41 crc kubenswrapper[4681]: I0404 03:50:41.505609 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b36b7670-b847-4635-8dd5-8d5ea0d7825c/probe/0.log" Apr 04 03:50:41 crc kubenswrapper[4681]: I0404 03:50:41.809021 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_df8847f1-00d6-45d1-a106-b2c8c69abb35/probe/0.log" Apr 04 03:50:41 crc kubenswrapper[4681]: I0404 03:50:41.836483 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_df8847f1-00d6-45d1-a106-b2c8c69abb35/cinder-scheduler/0.log" Apr 04 03:50:42 crc kubenswrapper[4681]: I0404 03:50:42.297110 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_1ba04b4d-7697-4313-8759-e95a65957daa/probe/0.log" Apr 04 03:50:42 crc kubenswrapper[4681]: I0404 03:50:42.759731 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_b36b7670-b847-4635-8dd5-8d5ea0d7825c/cinder-backup/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.168075 4681 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c220dfdf-0f59-4093-b5dd-b2eba1a80fee/probe/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.173452 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a1f293d4-d146-49d4-a75d-8e972a25b758/cinder-api/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.219714 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_1ba04b4d-7697-4313-8759-e95a65957daa/cinder-volume/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.589922 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-g74jm_e1248b6b-52bc-4b4a-b901-afa695bb799f/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.758780 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_c220dfdf-0f59-4093-b5dd-b2eba1a80fee/cinder-volume/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.778218 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/init/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.833961 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-6b966_6d18b62e-86ae-4c2b-864c-315581ca4f1a/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:43 crc kubenswrapper[4681]: I0404 03:50:43.931824 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/init/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.185696 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_9af43da5-4945-49e2-ad66-afe1eefd4f4c/glance-httpd/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.188375 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-99rqr_b3b7061a-37ce-4302-a3a3-f06aff60e3a3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.225744 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-7444fdbf45-49mp6_f63bd22c-53ff-43aa-bc6d-fd388516ef62/dnsmasq-dns/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.240039 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9af43da5-4945-49e2-ad66-afe1eefd4f4c/glance-log/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.376140 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1f0d9a1d-5773-426e-adfa-6a0aae0ec79a/glance-log/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.421974 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1f0d9a1d-5773-426e-adfa-6a0aae0ec79a/glance-httpd/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.498296 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qsnfq_76d7d624-1948-4ecc-ae72-3e40c03ec267/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:44 crc kubenswrapper[4681]: I0404 03:50:44.868063 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29587861-lh4pt_dfc4e081-9222-4cea-833f-d9137246664a/keystone-cron/0.log" Apr 04 03:50:45 crc kubenswrapper[4681]: I0404 03:50:45.079863 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_6e40c22d-4a3b-4321-ac7d-f623845423fc/kube-state-metrics/0.log" Apr 04 03:50:45 crc kubenswrapper[4681]: I0404 03:50:45.230921 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-cqsxv_c68459a6-8a5c-4a46-ac94-6c88e0c1f3d9/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:45 crc kubenswrapper[4681]: I0404 03:50:45.441105 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-5cdf6cfbdd-xgxdx_85cc490e-cee8-405f-b498-41415aae210e/keystone-api/0.log" Apr 04 03:50:45 crc kubenswrapper[4681]: I0404 03:50:45.870575 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554fd9954f-c5kv8_99648c0a-d8f3-41f8-a03d-7a21a4a84156/neutron-api/0.log" Apr 04 03:50:45 crc kubenswrapper[4681]: I0404 03:50:45.884821 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-554fd9954f-c5kv8_99648c0a-d8f3-41f8-a03d-7a21a4a84156/neutron-httpd/0.log" Apr 04 03:50:46 crc kubenswrapper[4681]: I0404 03:50:46.098742 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pwrwl_5c4ac822-458d-449c-b7e9-16ce85e56b63/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:46 crc kubenswrapper[4681]: I0404 03:50:46.115751 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/setup-container/0.log" Apr 04 03:50:46 crc kubenswrapper[4681]: I0404 03:50:46.262568 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/setup-container/0.log" Apr 04 03:50:46 crc kubenswrapper[4681]: I0404 03:50:46.345141 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_notifications-rabbitmq-server-0_189dfe5e-4211-48c8-bc76-ea9c229c5d65/rabbitmq/0.log" Apr 04 03:50:46 crc kubenswrapper[4681]: I0404 03:50:46.589688 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hdcs2_1784fc32-2907-4203-a7cd-0053cfe1d338/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:47 crc kubenswrapper[4681]: I0404 03:50:47.131927 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f63a7210-378a-4a4e-a458-33f19fbc360b/nova-cell0-conductor-conductor/0.log" Apr 04 03:50:47 crc kubenswrapper[4681]: I0404 03:50:47.341171 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_0e7ba727-658d-49f6-9e24-68da37adca06/nova-cell1-conductor-conductor/0.log" Apr 04 03:50:47 crc kubenswrapper[4681]: I0404 03:50:47.803129 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f5986086-65b9-41b2-bb40-8ad2c6b42d11/nova-cell1-novncproxy-novncproxy/0.log" Apr 04 03:50:48 crc kubenswrapper[4681]: I0404 03:50:48.150022 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_858e598e-35ac-4ca2-a5d5-52e31278378f/nova-api-log/0.log" Apr 04 03:50:48 crc kubenswrapper[4681]: I0404 03:50:48.322219 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c85c4e78-d474-4016-b2b1-e05582da0f60/nova-metadata-log/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.096633 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lf7rx_1c6f1a3c-3cad-4d39-8155-69c4a2ce1378/nova-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.161516 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-scheduler-0_16143562-a1da-4713-a062-e3b850e170f0/nova-scheduler-scheduler/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.366497 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_858e598e-35ac-4ca2-a5d5-52e31278378f/nova-api-api/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.366603 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c85c4e78-d474-4016-b2b1-e05582da0f60/nova-metadata-metadata/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.366737 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/mysql-bootstrap/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.555636 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/mysql-bootstrap/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.581448 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_7de30d66-63ae-43ca-8d87-33b3fc14f4b2/galera/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.656611 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/mysql-bootstrap/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.838480 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/mysql-bootstrap/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.882128 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dd82b7b7-ba75-4588-9dc2-c47ed34762b5/galera/0.log" Apr 04 03:50:49 crc kubenswrapper[4681]: I0404 03:50:49.966095 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstackclient_e453c2ba-d2af-4ad5-8f25-91b386e9f9a6/openstackclient/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.088198 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-jz78r_616e7c64-534b-41e8-8ad9-0abf8f05d3d5/ovn-controller/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.153389 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-nlvvn_209debba-9c1c-4486-82c7-38424335f889/openstack-network-exporter/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.328837 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server-init/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.545786 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server-init/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.568096 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovsdb-server/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.787335 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4c79dddc-8bad-4bfb-920f-434aea2c400c/openstack-network-exporter/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.816562 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ndgrb_38cc8476-2432-47d7-ad56-fd155b7680a5/ovs-vswitchd/0.log" Apr 04 03:50:50 crc kubenswrapper[4681]: I0404 03:50:50.955216 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-fh7b5_bc5eea0b-38e8-42f7-b1bf-c3443b4cd9ad/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.077120 
4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_4c79dddc-8bad-4bfb-920f-434aea2c400c/ovn-northd/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.131485 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30fe1cfd-59db-4c85-bf2c-a476faeabd9c/openstack-network-exporter/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.175174 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_30fe1cfd-59db-4c85-bf2c-a476faeabd9c/ovsdbserver-nb/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.558250 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f2a3604e-5c76-460f-aebb-5e2e89688d74/ovsdbserver-sb/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.609716 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f2a3604e-5c76-460f-aebb-5e2e89688d74/openstack-network-exporter/0.log" Apr 04 03:50:51 crc kubenswrapper[4681]: I0404 03:50:51.869394 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/init-config-reloader/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.058032 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/config-reloader/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.090659 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/init-config-reloader/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.132442 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674794d9f6-5s9ps_b5b3ede0-d5ce-41d0-a320-ee0e732c8f86/placement-api/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.206867 4681 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-674794d9f6-5s9ps_b5b3ede0-d5ce-41d0-a320-ee0e732c8f86/placement-log/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.252649 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/prometheus/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.330649 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_757762e6-7520-4fec-8323-41bf4a53a889/thanos-sidecar/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.408639 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/setup-container/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.613964 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/setup-container/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.701729 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/setup-container/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.746527 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_dfd8bf26-d103-4fa4-92d1-b463c9012169/rabbitmq/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.924621 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/rabbitmq/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.934879 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_caa29c68-1123-4e1c-ba0a-8a34a9be0135/setup-container/0.log" Apr 04 03:50:52 crc kubenswrapper[4681]: I0404 03:50:52.988792 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-tmm4h_0b001b06-583d-4b8d-974e-e7cf078a514d/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.186502 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-4x4ws_3291d540-df5f-43ec-a016-a06df4e58ce6/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.234167 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-t8d79_3deb575c-2d6c-41a6-9650-3dddc756bb67/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.462948 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-cwn7s_17d6bc83-830a-47e3-b5c6-96ae2ecfad52/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.486179 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-bdcnq_00fa5e33-c452-4b88-bd67-bc0e6094d232/ssh-known-hosts-edpm-deployment/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.715858 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8456d9bb7c-dcjw6_cb09ea7e-aac7-4a55-962c-ca71e66e26a8/proxy-server/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.912021 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gz57l_6d76298d-bafc-4c57-9e19-f77f982a3187/swift-ring-rebalance/0.log" Apr 04 03:50:53 crc kubenswrapper[4681]: I0404 03:50:53.919122 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-8456d9bb7c-dcjw6_cb09ea7e-aac7-4a55-962c-ca71e66e26a8/proxy-httpd/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.035428 4681 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-auditor/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.150097 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-reaper/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.178926 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-replicator/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.200640 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:50:54 crc kubenswrapper[4681]: E0404 03:50:54.200971 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.246306 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-auditor/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.253017 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/account-server/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.376671 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-replicator/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.389204 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-server/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.449396 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/container-updater/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.488076 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-auditor/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.530715 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-expirer/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.619037 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-replicator/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.683284 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-server/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.695587 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/object-updater/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.743405 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/rsync/0.log" Apr 04 03:50:54 crc kubenswrapper[4681]: I0404 03:50:54.853347 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_cdc00a76-b945-4eca-98d7-1f126a78785f/swift-recon-cron/0.log" Apr 04 03:50:55 crc kubenswrapper[4681]: I0404 03:50:55.082227 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_tempest-tests-tempest_9d245209-8139-42b0-aae0-5cafddfc00dd/tempest-tests-tempest-tests-runner/0.log" Apr 04 03:50:55 crc kubenswrapper[4681]: I0404 03:50:55.237527 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_4e1e5191-e069-4447-ad1d-00e07ba61407/test-operator-logs-container/0.log" Apr 04 03:50:55 crc kubenswrapper[4681]: I0404 03:50:55.441611 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-g6dgm_79cff0ca-47f9-4198-abf2-a488089c2ade/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:55 crc kubenswrapper[4681]: I0404 03:50:55.493846 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-9j5bn_d7ef2b80-e8d5-4f17-8617-d3a88ef35137/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Apr 04 03:50:56 crc kubenswrapper[4681]: I0404 03:50:56.290154 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_8abb1419-6466-40ac-b2ec-2d6306e02026/watcher-applier/0.log" Apr 04 03:50:57 crc kubenswrapper[4681]: I0404 03:50:57.065429 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7f43afd0-4f66-4841-a564-7f47a84be4b1/watcher-api-log/0.log" Apr 04 03:51:00 crc kubenswrapper[4681]: I0404 03:51:00.237121 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_4c83a5da-4f75-4ce3-8ed1-77404dd4f2b0/watcher-decision-engine/0.log" Apr 04 03:51:01 crc kubenswrapper[4681]: I0404 03:51:01.462845 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_7f43afd0-4f66-4841-a564-7f47a84be4b1/watcher-api/0.log" Apr 04 03:51:06 crc kubenswrapper[4681]: I0404 03:51:06.200708 4681 scope.go:117] "RemoveContainer" 
containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:51:06 crc kubenswrapper[4681]: I0404 03:51:06.487776 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906"} Apr 04 03:51:09 crc kubenswrapper[4681]: I0404 03:51:09.802035 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_160ce09d-ccb7-4ce9-8bbe-574e115fcc3f/memcached/0.log" Apr 04 03:51:26 crc kubenswrapper[4681]: I0404 03:51:26.367497 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86644c9c9c-kvnd9_895bcf63-b464-4408-a0f2-8217d1a6179b/manager/0.log" Apr 04 03:51:26 crc kubenswrapper[4681]: I0404 03:51:26.617588 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-58689c6fff-rnnzd_6a8bc05a-a0b8-4ba3-b778-b3cb57b19a99/manager/0.log" Apr 04 03:51:26 crc kubenswrapper[4681]: I0404 03:51:26.665572 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d46cccfb9-ttwtp_23b37abe-289b-45e9-b55b-e2985e411401/manager/0.log" Apr 04 03:51:26 crc kubenswrapper[4681]: I0404 03:51:26.836019 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:51:26 crc kubenswrapper[4681]: I0404 03:51:26.995471 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.022897 4681 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.027785 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.177071 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/extract/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.182660 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/util/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.185115 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_f33e3d0660f3b32a1e5784662bc887bd3479b46d125400bfca1a248379mnfrx_622ff9fc-9cc4-4167-86f8-012c6031b393/pull/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.441987 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-8684f86954-4z752_06717285-4d9d-4b9d-919e-106dd0ec0274/manager/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.444864 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-648bdc7f99-skr68_4513182b-1bdb-40a2-ba02-2e8aa8567819/manager/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: I0404 03:51:27.656278 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6ccfd84cb4-sq9cm_be876d09-d6fd-46f7-a03c-8c13f72bee75/manager/0.log" Apr 04 03:51:27 crc kubenswrapper[4681]: 
I0404 03:51:27.926166 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f96574b5-82k6f_1a4403a6-7904-4764-aba4-02a2bcc4bc19/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.007238 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7ffb6b7cdc-gcbfv_4536a628-89aa-4f79-b180-9199d3cf390a/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.065423 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-dbf8bb784-4gx6m_80023eb5-5d8e-49ff-bd8d-d0fb3e290ccf/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.114587 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6b7497dc59-tllnk_82ce5791-77cb-418c-b3d2-7f49f625ccf1/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.542926 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6554749d88-tj6wj_28828ebb-13dc-4ba1-98e1-39c6f38e9245/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.703299 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-74tl4_856d74a1-4df8-446a-a82b-3dcc76f1af70/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.824088 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d6f9fd68c-x7x9p_3ac3008b-06b0-4ab7-a59f-3e7682627410/manager/0.log" Apr 04 03:51:28 crc kubenswrapper[4681]: I0404 03:51:28.896776 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7594f57946-c9j8w_d8331de2-1469-4856-a56c-f1e107779ca4/manager/0.log" Apr 04 03:51:29 crc 
kubenswrapper[4681]: I0404 03:51:29.029080 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-b7b49d78f-skbql_b54a4f45-de00-4dd5-95d4-f96a21d34189/manager/0.log" Apr 04 03:51:29 crc kubenswrapper[4681]: I0404 03:51:29.237949 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5645c5b4f-jmkvr_08d96c31-f9c1-4308-ba34-bb5135a86eb8/operator/0.log" Apr 04 03:51:29 crc kubenswrapper[4681]: I0404 03:51:29.533473 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-nc9vv_4551c34d-f733-4478-9613-7618e59322b5/registry-server/0.log" Apr 04 03:51:29 crc kubenswrapper[4681]: I0404 03:51:29.735356 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-84464c7c78-brc8n_b50e9e2f-832f-4de0-b38c-cb9f5f0d62ab/manager/0.log" Apr 04 03:51:29 crc kubenswrapper[4681]: I0404 03:51:29.933238 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-559d8fdb6b-tmg65_6479782a-b4ab-4e90-a9bd-29ef0a41f9d7/manager/0.log" Apr 04 03:51:30 crc kubenswrapper[4681]: I0404 03:51:30.101880 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zv8zs_2acd20f5-b31c-411a-989c-f0ad12628894/operator/0.log" Apr 04 03:51:30 crc kubenswrapper[4681]: I0404 03:51:30.242770 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-fbdcf7f7b-844tj_eb76f1dc-bae9-491f-a58e-3cc1f9c15571/manager/0.log" Apr 04 03:51:30 crc kubenswrapper[4681]: I0404 03:51:30.576526 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56ccc97cf5-j87hk_da0d70da-b61c-41ee-938b-f4a931300f75/manager/0.log" Apr 04 03:51:30 
crc kubenswrapper[4681]: I0404 03:51:30.673209 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6f76d4c7-2vrfg_8e44912b-0956-49e8-ad3e-140b3d60838e/manager/0.log" Apr 04 03:51:30 crc kubenswrapper[4681]: I0404 03:51:30.832414 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-667cfd88d7-2k5wm_df89fca6-3fb4-4d85-95df-4b48e4a1e884/manager/0.log" Apr 04 03:51:31 crc kubenswrapper[4681]: I0404 03:51:31.012919 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-58b78987f4-nwsmd_a2899081-691e-4ad2-8e98-4fb8b955a0cd/manager/0.log" Apr 04 03:51:51 crc kubenswrapper[4681]: I0404 03:51:51.240555 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c76br_b8f9c5e4-05ac-48dd-8e04-81b8087e3a72/control-plane-machine-set-operator/0.log" Apr 04 03:51:51 crc kubenswrapper[4681]: I0404 03:51:51.472975 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mftw8_15b64868-afa1-4d70-bfda-799ed31decdb/kube-rbac-proxy/0.log" Apr 04 03:51:51 crc kubenswrapper[4681]: I0404 03:51:51.489796 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-mftw8_15b64868-afa1-4d70-bfda-799ed31decdb/machine-api-operator/0.log" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.150537 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587912-824ll"] Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151690 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151712 4681 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151740 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151751 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151765 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151780 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151810 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151822 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151844 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151855 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151877 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151887 4681 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151905 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151913 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="extract-content" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151935 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151943 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.151967 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.151975 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="extract-utilities" Apr 04 03:52:00 crc kubenswrapper[4681]: E0404 03:52:00.152001 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0c05a9b-d399-4e3d-b738-aa14b9f23067" containerName="oc" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.152010 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0c05a9b-d399-4e3d-b738-aa14b9f23067" containerName="oc" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.152240 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2c8491-ce1c-472d-90b7-9e0da48d9f19" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.152251 4681 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d0c05a9b-d399-4e3d-b738-aa14b9f23067" containerName="oc" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.152299 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="6389da48-b7bc-4590-896b-891bc1cca82a" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.152312 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="24bb8be5-7c3c-4824-b8c9-7867dab6dfc9" containerName="registry-server" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.153182 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.155724 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.156901 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.157154 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.167074 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587912-824ll"] Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.293809 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptw8p\" (UniqueName: \"kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p\") pod \"auto-csr-approver-29587912-824ll\" (UID: \"c693e1eb-5790-4fef-b910-b6f710ea18cf\") " pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.396590 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptw8p\" (UniqueName: 
\"kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p\") pod \"auto-csr-approver-29587912-824ll\" (UID: \"c693e1eb-5790-4fef-b910-b6f710ea18cf\") " pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.412862 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptw8p\" (UniqueName: \"kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p\") pod \"auto-csr-approver-29587912-824ll\" (UID: \"c693e1eb-5790-4fef-b910-b6f710ea18cf\") " pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.473212 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:00 crc kubenswrapper[4681]: I0404 03:52:00.929129 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587912-824ll"] Apr 04 03:52:01 crc kubenswrapper[4681]: I0404 03:52:01.030641 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587912-824ll" event={"ID":"c693e1eb-5790-4fef-b910-b6f710ea18cf","Type":"ContainerStarted","Data":"26854264cfc6a3a47e7a98adbf633765ed6cf63fd1794c9eaa6bab747332b1f4"} Apr 04 03:52:03 crc kubenswrapper[4681]: I0404 03:52:03.073632 4681 generic.go:334] "Generic (PLEG): container finished" podID="c693e1eb-5790-4fef-b910-b6f710ea18cf" containerID="aea7ca15948bf7f495710ef6bcd39e27fb499a5ccb8428dc717ea78a8fdfc198" exitCode=0 Apr 04 03:52:03 crc kubenswrapper[4681]: I0404 03:52:03.074159 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587912-824ll" event={"ID":"c693e1eb-5790-4fef-b910-b6f710ea18cf","Type":"ContainerDied","Data":"aea7ca15948bf7f495710ef6bcd39e27fb499a5ccb8428dc717ea78a8fdfc198"} Apr 04 03:52:04 crc kubenswrapper[4681]: I0404 03:52:04.417613 4681 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:04 crc kubenswrapper[4681]: I0404 03:52:04.513369 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptw8p\" (UniqueName: \"kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p\") pod \"c693e1eb-5790-4fef-b910-b6f710ea18cf\" (UID: \"c693e1eb-5790-4fef-b910-b6f710ea18cf\") " Apr 04 03:52:04 crc kubenswrapper[4681]: I0404 03:52:04.531648 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p" (OuterVolumeSpecName: "kube-api-access-ptw8p") pod "c693e1eb-5790-4fef-b910-b6f710ea18cf" (UID: "c693e1eb-5790-4fef-b910-b6f710ea18cf"). InnerVolumeSpecName "kube-api-access-ptw8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:52:04 crc kubenswrapper[4681]: I0404 03:52:04.616212 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptw8p\" (UniqueName: \"kubernetes.io/projected/c693e1eb-5790-4fef-b910-b6f710ea18cf-kube-api-access-ptw8p\") on node \"crc\" DevicePath \"\"" Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.096363 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587912-824ll" event={"ID":"c693e1eb-5790-4fef-b910-b6f710ea18cf","Type":"ContainerDied","Data":"26854264cfc6a3a47e7a98adbf633765ed6cf63fd1794c9eaa6bab747332b1f4"} Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.096418 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26854264cfc6a3a47e7a98adbf633765ed6cf63fd1794c9eaa6bab747332b1f4" Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.096478 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587912-824ll" Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.485900 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587906-pk76c"] Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.497506 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587906-pk76c"] Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.755279 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-b7xpz_d71066c7-07f6-471d-9d8d-6746b3f229e9/cert-manager-controller/0.log" Apr 04 03:52:05 crc kubenswrapper[4681]: I0404 03:52:05.990438 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-2vkfl_9fde43ea-36ff-4f94-ba5e-8e1ea1338b1e/cert-manager-cainjector/0.log" Apr 04 03:52:06 crc kubenswrapper[4681]: I0404 03:52:06.005393 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-6ppdd_4d2e304b-f02c-427a-b2a2-f1e8cc7efb70/cert-manager-webhook/0.log" Apr 04 03:52:07 crc kubenswrapper[4681]: I0404 03:52:07.211740 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c60aa57-c814-4a04-a0ba-383b8e59e477" path="/var/lib/kubelet/pods/1c60aa57-c814-4a04-a0ba-383b8e59e477/volumes" Apr 04 03:52:19 crc kubenswrapper[4681]: I0404 03:52:19.876776 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7b5ddc4dc7-cg7df_04ddbb2f-cda2-457e-8e65-e3b1e1d9ae53/nmstate-console-plugin/0.log" Apr 04 03:52:20 crc kubenswrapper[4681]: I0404 03:52:20.069691 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-l6xdx_e174b98a-0ca7-4dfc-846f-b0395cb9b4a4/nmstate-handler/0.log" Apr 04 03:52:20 crc kubenswrapper[4681]: I0404 03:52:20.124367 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-gc988_ec47a21e-ac21-4720-ac9c-b0b9f50bfc85/kube-rbac-proxy/0.log" Apr 04 03:52:20 crc kubenswrapper[4681]: I0404 03:52:20.169848 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-gc988_ec47a21e-ac21-4720-ac9c-b0b9f50bfc85/nmstate-metrics/0.log" Apr 04 03:52:20 crc kubenswrapper[4681]: I0404 03:52:20.344921 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6b8c6447b-vd5sz_dd449ba7-bc18-4cdb-8f0f-05c997e2274e/nmstate-operator/0.log" Apr 04 03:52:20 crc kubenswrapper[4681]: I0404 03:52:20.399908 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-jn9q8_fbb2aa57-946f-43fb-9380-83a69cced169/nmstate-webhook/0.log" Apr 04 03:52:35 crc kubenswrapper[4681]: I0404 03:52:35.422882 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-86dff4bf76-pkfp4_b23b52f2-8062-48f1-a937-590414fcb369/prometheus-operator/0.log" Apr 04 03:52:35 crc kubenswrapper[4681]: I0404 03:52:35.608164 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv_c11383e1-c1fe-4d1e-ab47-234adca1f589/prometheus-operator-admission-webhook/0.log" Apr 04 03:52:35 crc kubenswrapper[4681]: I0404 03:52:35.727874 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-bpz42_ca6efa76-cc20-4742-9c8b-1ef70ff6acff/prometheus-operator-admission-webhook/0.log" Apr 04 03:52:35 crc kubenswrapper[4681]: I0404 03:52:35.896804 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-dd944d769-szx5n_a510961d-019d-41d4-8a75-66f69f5d6728/operator/0.log" Apr 04 03:52:35 crc kubenswrapper[4681]: I0404 03:52:35.911259 4681 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_perses-operator-74445bf4b8-4fv89_0c2979a7-06b5-4451-875e-f8e64da75780/perses-operator/0.log" Apr 04 03:52:36 crc kubenswrapper[4681]: I0404 03:52:36.189376 4681 scope.go:117] "RemoveContainer" containerID="38b7eb56b05feacd37ca7367b559afe3fd69f8db6f8d3065cb4935fc3c745891" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.025218 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bb64cd5d7-vnp7h_10953a36-52e8-4614-af9d-7df97c580ffc/kube-rbac-proxy/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.177354 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bb64cd5d7-vnp7h_10953a36-52e8-4614-af9d-7df97c580ffc/controller/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.292409 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.435182 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.448553 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.477592 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.529601 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.760110 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.795311 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.797299 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.802646 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:52:51 crc kubenswrapper[4681]: I0404 03:52:51.966126 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-metrics/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.020766 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-reloader/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.026864 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/cp-frr-files/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.033722 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/controller/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.218101 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/kube-rbac-proxy-frr/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.239377 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/kube-rbac-proxy/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.251457 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/frr-metrics/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.407923 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/reloader/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.453114 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-wn7bd_82278f5d-bc0c-45d9-9efd-170e322295dd/frr-k8s-webhook-server/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.695864 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b949c746f-bbmhk_bbb46a7c-3e17-4b01-8a75-20a864bee1d3/manager/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.867255 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b54d9cb4b-kbxnx_22266062-5a6f-4352-80ea-f9cb334bf963/webhook-server/0.log" Apr 04 03:52:52 crc kubenswrapper[4681]: I0404 03:52:52.997079 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lvp5w_b9303934-434e-47f9-8c2b-36d6e6320ab2/kube-rbac-proxy/0.log" Apr 04 03:52:53 crc kubenswrapper[4681]: I0404 03:52:53.582358 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-lvp5w_b9303934-434e-47f9-8c2b-36d6e6320ab2/speaker/0.log" Apr 04 03:52:54 crc kubenswrapper[4681]: I0404 03:52:54.313386 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-kj4kn_873d7ebe-9962-4fd0-84e5-4dbc1c576644/frr/0.log" Apr 04 03:53:05 crc kubenswrapper[4681]: I0404 03:53:05.983220 4681 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.165044 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.180970 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.198185 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.341959 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.370015 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/extract/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.381726 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_4d7975b5a0e69e817bb81974b87da6fc0f34c999595997c20f9b4da3efwktd9_992f104f-096e-415f-a791-d2f2d0bd17a7/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.541611 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.726560 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.760847 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.761476 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.886248 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/pull/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.888600 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/util/0.log" Apr 04 03:53:06 crc kubenswrapper[4681]: I0404 03:53:06.953238 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5899fb83590e6455a3f628fcbbcb1091eafa68b265018139ed84146645vvzbw_c7308318-787f-4347-8939-2f27b367b588/extract/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.094184 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log" Apr 04 
03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.241574 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.249329 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.258926 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.390288 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/util/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.444100 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/extract/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.446458 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf92397268j668_d625d583-bc5e-4cf4-914b-09f9452b7633/pull/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.582186 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.762941 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.822118 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.825116 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.969047 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-utilities/0.log" Apr 04 03:53:07 crc kubenswrapper[4681]: I0404 03:53:07.994989 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/extract-content/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.185556 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.514904 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.522984 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.527245 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.720916 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-utilities/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.776609 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/extract-content/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.956392 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log" Apr 04 03:53:08 crc kubenswrapper[4681]: I0404 03:53:08.991696 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s7ppd_bb4a8188-2d15-4ecb-8b44-46d49acb6dd8/registry-server/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.245935 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.279971 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.282588 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.435252 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/util/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.506441 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/pull/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.563330 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_ff4f5ee0f799971b32a0c10c1c4dbe619002004ad8a7fdbcfb49ebac0d7g6kl_2b4070a8-f657-4819-8ab7-b105f33e5560/extract/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.790209 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hdr5h_76a1fdd0-d5af-45fe-8f41-bed5f036a8e1/marketplace-operator/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.796192 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-pc5lk_f5161cee-bf19-458a-95a3-8cf593f8f78c/registry-server/0.log" Apr 04 03:53:09 crc kubenswrapper[4681]: I0404 03:53:09.903050 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.043590 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.081285 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.081486 4681 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.282279 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.295332 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-utilities/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.301600 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/extract-content/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.483417 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-rlcg4_2ed9fccc-c563-4589-8289-6293e52869e4/registry-server/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.507308 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.536325 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.552660 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log" Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.715612 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-utilities/0.log" 
Apr 04 03:53:10 crc kubenswrapper[4681]: I0404 03:53:10.734722 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/extract-content/0.log" Apr 04 03:53:11 crc kubenswrapper[4681]: I0404 03:53:11.558885 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vvbw2_07348284-8b90-475b-92e4-f92c9a4ec127/registry-server/0.log" Apr 04 03:53:23 crc kubenswrapper[4681]: I0404 03:53:23.667575 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-bpz42_ca6efa76-cc20-4742-9c8b-1ef70ff6acff/prometheus-operator-admission-webhook/0.log" Apr 04 03:53:23 crc kubenswrapper[4681]: I0404 03:53:23.692752 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-86dff4bf76-pkfp4_b23b52f2-8062-48f1-a937-590414fcb369/prometheus-operator/0.log" Apr 04 03:53:23 crc kubenswrapper[4681]: I0404 03:53:23.708586 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-57bc579d78-6gdgv_c11383e1-c1fe-4d1e-ab47-234adca1f589/prometheus-operator-admission-webhook/0.log" Apr 04 03:53:23 crc kubenswrapper[4681]: I0404 03:53:23.876515 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-dd944d769-szx5n_a510961d-019d-41d4-8a75-66f69f5d6728/operator/0.log" Apr 04 03:53:23 crc kubenswrapper[4681]: I0404 03:53:23.922955 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-74445bf4b8-4fv89_0c2979a7-06b5-4451-875e-f8e64da75780/perses-operator/0.log" Apr 04 03:53:26 crc kubenswrapper[4681]: I0404 03:53:26.529964 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:53:26 crc kubenswrapper[4681]: I0404 03:53:26.530036 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:53:56 crc kubenswrapper[4681]: I0404 03:53:56.525490 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:53:56 crc kubenswrapper[4681]: I0404 03:53:56.526112 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.141220 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587914-7pkxp"] Apr 04 03:54:00 crc kubenswrapper[4681]: E0404 03:54:00.141931 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c693e1eb-5790-4fef-b910-b6f710ea18cf" containerName="oc" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.141948 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="c693e1eb-5790-4fef-b910-b6f710ea18cf" containerName="oc" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.142184 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="c693e1eb-5790-4fef-b910-b6f710ea18cf" 
containerName="oc" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.142869 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.145204 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.145553 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.146451 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.151285 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587914-7pkxp"] Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.234210 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wdff\" (UniqueName: \"kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff\") pod \"auto-csr-approver-29587914-7pkxp\" (UID: \"7e6a4d29-81f9-4681-8edb-c4c48b272073\") " pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.336480 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wdff\" (UniqueName: \"kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff\") pod \"auto-csr-approver-29587914-7pkxp\" (UID: \"7e6a4d29-81f9-4681-8edb-c4c48b272073\") " pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.358619 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wdff\" (UniqueName: 
\"kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff\") pod \"auto-csr-approver-29587914-7pkxp\" (UID: \"7e6a4d29-81f9-4681-8edb-c4c48b272073\") " pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:00 crc kubenswrapper[4681]: I0404 03:54:00.463672 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:01 crc kubenswrapper[4681]: I0404 03:54:01.004416 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587914-7pkxp"] Apr 04 03:54:01 crc kubenswrapper[4681]: I0404 03:54:01.767158 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" event={"ID":"7e6a4d29-81f9-4681-8edb-c4c48b272073","Type":"ContainerStarted","Data":"07d55d8708f2824c4f0aef656b453e56ff8906866c3e0fcd9a1c0cba7fbb10d9"} Apr 04 03:54:02 crc kubenswrapper[4681]: I0404 03:54:02.783904 4681 generic.go:334] "Generic (PLEG): container finished" podID="7e6a4d29-81f9-4681-8edb-c4c48b272073" containerID="362954ca56b297c3ce1c3afe3d5c2c991ee8318e989e321b30e3b15925921afb" exitCode=0 Apr 04 03:54:02 crc kubenswrapper[4681]: I0404 03:54:02.783984 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" event={"ID":"7e6a4d29-81f9-4681-8edb-c4c48b272073","Type":"ContainerDied","Data":"362954ca56b297c3ce1c3afe3d5c2c991ee8318e989e321b30e3b15925921afb"} Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.303445 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.436663 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wdff\" (UniqueName: \"kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff\") pod \"7e6a4d29-81f9-4681-8edb-c4c48b272073\" (UID: \"7e6a4d29-81f9-4681-8edb-c4c48b272073\") " Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.447647 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff" (OuterVolumeSpecName: "kube-api-access-2wdff") pod "7e6a4d29-81f9-4681-8edb-c4c48b272073" (UID: "7e6a4d29-81f9-4681-8edb-c4c48b272073"). InnerVolumeSpecName "kube-api-access-2wdff". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.540439 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wdff\" (UniqueName: \"kubernetes.io/projected/7e6a4d29-81f9-4681-8edb-c4c48b272073-kube-api-access-2wdff\") on node \"crc\" DevicePath \"\"" Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.814999 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" event={"ID":"7e6a4d29-81f9-4681-8edb-c4c48b272073","Type":"ContainerDied","Data":"07d55d8708f2824c4f0aef656b453e56ff8906866c3e0fcd9a1c0cba7fbb10d9"} Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.815330 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07d55d8708f2824c4f0aef656b453e56ff8906866c3e0fcd9a1c0cba7fbb10d9" Apr 04 03:54:04 crc kubenswrapper[4681]: I0404 03:54:04.815094 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587914-7pkxp" Apr 04 03:54:05 crc kubenswrapper[4681]: I0404 03:54:05.403035 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587908-gfh2l"] Apr 04 03:54:05 crc kubenswrapper[4681]: I0404 03:54:05.415996 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587908-gfh2l"] Apr 04 03:54:07 crc kubenswrapper[4681]: I0404 03:54:07.226527 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073524c9-3e47-4f2f-a3ef-df6456d79233" path="/var/lib/kubelet/pods/073524c9-3e47-4f2f-a3ef-df6456d79233/volumes" Apr 04 03:54:26 crc kubenswrapper[4681]: I0404 03:54:26.524048 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:54:26 crc kubenswrapper[4681]: I0404 03:54:26.524791 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:54:26 crc kubenswrapper[4681]: I0404 03:54:26.524853 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:54:26 crc kubenswrapper[4681]: I0404 03:54:26.525936 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:54:26 crc kubenswrapper[4681]: I0404 03:54:26.526035 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906" gracePeriod=600 Apr 04 03:54:27 crc kubenswrapper[4681]: I0404 03:54:27.092852 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906" exitCode=0 Apr 04 03:54:27 crc kubenswrapper[4681]: I0404 03:54:27.092923 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906"} Apr 04 03:54:27 crc kubenswrapper[4681]: I0404 03:54:27.093204 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerStarted","Data":"180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e"} Apr 04 03:54:27 crc kubenswrapper[4681]: I0404 03:54:27.093229 4681 scope.go:117] "RemoveContainer" containerID="083703548ac9ca34f2603630b8bcf1c52b8c7285f6fdc531508849efefdc4649" Apr 04 03:54:36 crc kubenswrapper[4681]: I0404 03:54:36.317857 4681 scope.go:117] "RemoveContainer" containerID="7284fe24a1257b3c6ac51ccc986e4afa1574270dad4337663ee0f05999511b4a" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.686011 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:54:54 crc kubenswrapper[4681]: E0404 
03:54:54.687745 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e6a4d29-81f9-4681-8edb-c4c48b272073" containerName="oc" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.687778 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e6a4d29-81f9-4681-8edb-c4c48b272073" containerName="oc" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.688298 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e6a4d29-81f9-4681-8edb-c4c48b272073" containerName="oc" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.691855 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.703684 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.822760 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brp55\" (UniqueName: \"kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.822867 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.822896 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities\") pod 
\"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.925268 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brp55\" (UniqueName: \"kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.925361 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.925383 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.925897 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.926027 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content\") pod \"redhat-marketplace-n78wv\" (UID: 
\"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:54 crc kubenswrapper[4681]: I0404 03:54:54.950322 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brp55\" (UniqueName: \"kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55\") pod \"redhat-marketplace-n78wv\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:55 crc kubenswrapper[4681]: I0404 03:54:55.036154 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:54:55 crc kubenswrapper[4681]: I0404 03:54:55.615386 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:54:55 crc kubenswrapper[4681]: W0404 03:54:55.620493 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7672c62_25a9_4b0c_a8dc_6749e411c155.slice/crio-425639e64b1b0d71ac5c4d64592765ea4b1ba28384e32480cb4f437f83196827 WatchSource:0}: Error finding container 425639e64b1b0d71ac5c4d64592765ea4b1ba28384e32480cb4f437f83196827: Status 404 returned error can't find the container with id 425639e64b1b0d71ac5c4d64592765ea4b1ba28384e32480cb4f437f83196827 Apr 04 03:54:56 crc kubenswrapper[4681]: I0404 03:54:56.532619 4681 generic.go:334] "Generic (PLEG): container finished" podID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerID="712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06" exitCode=0 Apr 04 03:54:56 crc kubenswrapper[4681]: I0404 03:54:56.532699 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerDied","Data":"712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06"} Apr 04 03:54:56 crc 
kubenswrapper[4681]: I0404 03:54:56.533521 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerStarted","Data":"425639e64b1b0d71ac5c4d64592765ea4b1ba28384e32480cb4f437f83196827"} Apr 04 03:54:56 crc kubenswrapper[4681]: I0404 03:54:56.535903 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:54:57 crc kubenswrapper[4681]: I0404 03:54:57.544464 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerStarted","Data":"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182"} Apr 04 03:54:58 crc kubenswrapper[4681]: I0404 03:54:58.561359 4681 generic.go:334] "Generic (PLEG): container finished" podID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerID="88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182" exitCode=0 Apr 04 03:54:58 crc kubenswrapper[4681]: I0404 03:54:58.561475 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerDied","Data":"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182"} Apr 04 03:54:59 crc kubenswrapper[4681]: I0404 03:54:59.572677 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerStarted","Data":"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395"} Apr 04 03:54:59 crc kubenswrapper[4681]: I0404 03:54:59.598640 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n78wv" podStartSLOduration=3.139034508 podStartE2EDuration="5.598621163s" podCreationTimestamp="2026-04-04 03:54:54 +0000 UTC" 
firstStartedPulling="2026-04-04 03:54:56.535670655 +0000 UTC m=+7176.201445775" lastFinishedPulling="2026-04-04 03:54:58.99525727 +0000 UTC m=+7178.661032430" observedRunningTime="2026-04-04 03:54:59.595085705 +0000 UTC m=+7179.260860825" watchObservedRunningTime="2026-04-04 03:54:59.598621163 +0000 UTC m=+7179.264396293" Apr 04 03:55:05 crc kubenswrapper[4681]: I0404 03:55:05.036518 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:05 crc kubenswrapper[4681]: I0404 03:55:05.037125 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:05 crc kubenswrapper[4681]: I0404 03:55:05.126519 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:05 crc kubenswrapper[4681]: I0404 03:55:05.787213 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:05 crc kubenswrapper[4681]: I0404 03:55:05.854998 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:55:07 crc kubenswrapper[4681]: I0404 03:55:07.699553 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n78wv" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="registry-server" containerID="cri-o://25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395" gracePeriod=2 Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.263103 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.358674 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brp55\" (UniqueName: \"kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55\") pod \"a7672c62-25a9-4b0c-a8dc-6749e411c155\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.358867 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities\") pod \"a7672c62-25a9-4b0c-a8dc-6749e411c155\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.358981 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content\") pod \"a7672c62-25a9-4b0c-a8dc-6749e411c155\" (UID: \"a7672c62-25a9-4b0c-a8dc-6749e411c155\") " Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.360640 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities" (OuterVolumeSpecName: "utilities") pod "a7672c62-25a9-4b0c-a8dc-6749e411c155" (UID: "a7672c62-25a9-4b0c-a8dc-6749e411c155"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.364109 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55" (OuterVolumeSpecName: "kube-api-access-brp55") pod "a7672c62-25a9-4b0c-a8dc-6749e411c155" (UID: "a7672c62-25a9-4b0c-a8dc-6749e411c155"). InnerVolumeSpecName "kube-api-access-brp55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.384899 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7672c62-25a9-4b0c-a8dc-6749e411c155" (UID: "a7672c62-25a9-4b0c-a8dc-6749e411c155"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.461417 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-utilities\") on node \"crc\" DevicePath \"\"" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.461459 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7672c62-25a9-4b0c-a8dc-6749e411c155-catalog-content\") on node \"crc\" DevicePath \"\"" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.461476 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brp55\" (UniqueName: \"kubernetes.io/projected/a7672c62-25a9-4b0c-a8dc-6749e411c155-kube-api-access-brp55\") on node \"crc\" DevicePath \"\"" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.713846 4681 generic.go:334] "Generic (PLEG): container finished" podID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerID="25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395" exitCode=0 Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.713915 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerDied","Data":"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395"} Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.713954 4681 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n78wv" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.713963 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n78wv" event={"ID":"a7672c62-25a9-4b0c-a8dc-6749e411c155","Type":"ContainerDied","Data":"425639e64b1b0d71ac5c4d64592765ea4b1ba28384e32480cb4f437f83196827"} Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.714040 4681 scope.go:117] "RemoveContainer" containerID="25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.741302 4681 scope.go:117] "RemoveContainer" containerID="88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.775534 4681 scope.go:117] "RemoveContainer" containerID="712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.791350 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.801832 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n78wv"] Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.849293 4681 scope.go:117] "RemoveContainer" containerID="25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395" Apr 04 03:55:08 crc kubenswrapper[4681]: E0404 03:55:08.849786 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395\": container with ID starting with 25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395 not found: ID does not exist" containerID="25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.849827 4681 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395"} err="failed to get container status \"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395\": rpc error: code = NotFound desc = could not find container \"25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395\": container with ID starting with 25769e4cb95702ccf79c1fd33325c7cbf8303daf822fb6f80929a7703e008395 not found: ID does not exist" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.849855 4681 scope.go:117] "RemoveContainer" containerID="88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182" Apr 04 03:55:08 crc kubenswrapper[4681]: E0404 03:55:08.850312 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182\": container with ID starting with 88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182 not found: ID does not exist" containerID="88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.854329 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182"} err="failed to get container status \"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182\": rpc error: code = NotFound desc = could not find container \"88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182\": container with ID starting with 88162a939e73fad378304e44cf292607631ceb06c2dead175780a220a9e80182 not found: ID does not exist" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.854494 4681 scope.go:117] "RemoveContainer" containerID="712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06" Apr 04 03:55:08 crc kubenswrapper[4681]: E0404 
03:55:08.857257 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06\": container with ID starting with 712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06 not found: ID does not exist" containerID="712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06" Apr 04 03:55:08 crc kubenswrapper[4681]: I0404 03:55:08.857345 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06"} err="failed to get container status \"712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06\": rpc error: code = NotFound desc = could not find container \"712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06\": container with ID starting with 712273936ce78efd2113f25de304ed176e93491898b0e66a4ab0e3eda461de06 not found: ID does not exist" Apr 04 03:55:09 crc kubenswrapper[4681]: I0404 03:55:09.217647 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" path="/var/lib/kubelet/pods/a7672c62-25a9-4b0c-a8dc-6749e411c155/volumes" Apr 04 03:55:28 crc kubenswrapper[4681]: I0404 03:55:28.058602 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerID="a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f" exitCode=0 Apr 04 03:55:28 crc kubenswrapper[4681]: I0404 03:55:28.058707 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-shsrj/must-gather-cz9zd" event={"ID":"ed8fde71-911f-4017-8b4d-05022b816eb3","Type":"ContainerDied","Data":"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f"} Apr 04 03:55:28 crc kubenswrapper[4681]: I0404 03:55:28.059906 4681 scope.go:117] "RemoveContainer" 
containerID="a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f" Apr 04 03:55:28 crc kubenswrapper[4681]: I0404 03:55:28.960730 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shsrj_must-gather-cz9zd_ed8fde71-911f-4017-8b4d-05022b816eb3/gather/0.log" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.106842 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-shsrj/must-gather-cz9zd"] Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.107601 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-shsrj/must-gather-cz9zd" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="copy" containerID="cri-o://0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069" gracePeriod=2 Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.124901 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-shsrj/must-gather-cz9zd"] Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.564003 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shsrj_must-gather-cz9zd_ed8fde71-911f-4017-8b4d-05022b816eb3/copy/0.log" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.564888 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.670991 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output\") pod \"ed8fde71-911f-4017-8b4d-05022b816eb3\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.671346 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4rh4\" (UniqueName: \"kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4\") pod \"ed8fde71-911f-4017-8b4d-05022b816eb3\" (UID: \"ed8fde71-911f-4017-8b4d-05022b816eb3\") " Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.687516 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4" (OuterVolumeSpecName: "kube-api-access-q4rh4") pod "ed8fde71-911f-4017-8b4d-05022b816eb3" (UID: "ed8fde71-911f-4017-8b4d-05022b816eb3"). InnerVolumeSpecName "kube-api-access-q4rh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.773576 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4rh4\" (UniqueName: \"kubernetes.io/projected/ed8fde71-911f-4017-8b4d-05022b816eb3-kube-api-access-q4rh4\") on node \"crc\" DevicePath \"\"" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.884435 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ed8fde71-911f-4017-8b4d-05022b816eb3" (UID: "ed8fde71-911f-4017-8b4d-05022b816eb3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Apr 04 03:55:42 crc kubenswrapper[4681]: I0404 03:55:42.977691 4681 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ed8fde71-911f-4017-8b4d-05022b816eb3-must-gather-output\") on node \"crc\" DevicePath \"\"" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.212958 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" path="/var/lib/kubelet/pods/ed8fde71-911f-4017-8b4d-05022b816eb3/volumes" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.243219 4681 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-shsrj_must-gather-cz9zd_ed8fde71-911f-4017-8b4d-05022b816eb3/copy/0.log" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.243781 4681 generic.go:334] "Generic (PLEG): container finished" podID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerID="0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069" exitCode=143 Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.243826 4681 scope.go:117] "RemoveContainer" containerID="0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.243856 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-shsrj/must-gather-cz9zd" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.275109 4681 scope.go:117] "RemoveContainer" containerID="a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.342448 4681 scope.go:117] "RemoveContainer" containerID="0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069" Apr 04 03:55:43 crc kubenswrapper[4681]: E0404 03:55:43.342816 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069\": container with ID starting with 0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069 not found: ID does not exist" containerID="0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.342858 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069"} err="failed to get container status \"0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069\": rpc error: code = NotFound desc = could not find container \"0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069\": container with ID starting with 0d31dcd1b540e27b71f8c94420d80fd4ab2a099778d9be9239c89b7d1529b069 not found: ID does not exist" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.342884 4681 scope.go:117] "RemoveContainer" containerID="a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f" Apr 04 03:55:43 crc kubenswrapper[4681]: E0404 03:55:43.343124 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f\": container with ID starting with 
a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f not found: ID does not exist" containerID="a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f" Apr 04 03:55:43 crc kubenswrapper[4681]: I0404 03:55:43.343153 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f"} err="failed to get container status \"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f\": rpc error: code = NotFound desc = could not find container \"a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f\": container with ID starting with a9aac46f39d170e70365d09fbd71b802885a9d54a0aa9d07a5edb065db1a131f not found: ID does not exist" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.142547 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587916-kn9w2"] Apr 04 03:56:00 crc kubenswrapper[4681]: E0404 03:56:00.146518 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="gather" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.146637 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="gather" Apr 04 03:56:00 crc kubenswrapper[4681]: E0404 03:56:00.146712 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="registry-server" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.146768 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="registry-server" Apr 04 03:56:00 crc kubenswrapper[4681]: E0404 03:56:00.146844 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="copy" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.146910 4681 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="copy" Apr 04 03:56:00 crc kubenswrapper[4681]: E0404 03:56:00.146989 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="extract-utilities" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.147055 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="extract-utilities" Apr 04 03:56:00 crc kubenswrapper[4681]: E0404 03:56:00.147149 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="extract-content" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.147222 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="extract-content" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.147506 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="gather" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.147596 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7672c62-25a9-4b0c-a8dc-6749e411c155" containerName="registry-server" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.147657 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed8fde71-911f-4017-8b4d-05022b816eb3" containerName="copy" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.148348 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.150653 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.151581 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.152231 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.169559 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587916-kn9w2"] Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.275108 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtrb\" (UniqueName: \"kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb\") pod \"auto-csr-approver-29587916-kn9w2\" (UID: \"83ee0782-fbec-4618-bac8-fde274cefda2\") " pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.377642 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtrb\" (UniqueName: \"kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb\") pod \"auto-csr-approver-29587916-kn9w2\" (UID: \"83ee0782-fbec-4618-bac8-fde274cefda2\") " pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.401619 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtrb\" (UniqueName: \"kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb\") pod \"auto-csr-approver-29587916-kn9w2\" (UID: \"83ee0782-fbec-4618-bac8-fde274cefda2\") " 
pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.467229 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:00 crc kubenswrapper[4681]: I0404 03:56:00.895285 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587916-kn9w2"] Apr 04 03:56:00 crc kubenswrapper[4681]: W0404 03:56:00.899483 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83ee0782_fbec_4618_bac8_fde274cefda2.slice/crio-1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f WatchSource:0}: Error finding container 1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f: Status 404 returned error can't find the container with id 1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f Apr 04 03:56:01 crc kubenswrapper[4681]: I0404 03:56:01.468106 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" event={"ID":"83ee0782-fbec-4618-bac8-fde274cefda2","Type":"ContainerStarted","Data":"1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f"} Apr 04 03:56:02 crc kubenswrapper[4681]: I0404 03:56:02.481547 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" event={"ID":"83ee0782-fbec-4618-bac8-fde274cefda2","Type":"ContainerStarted","Data":"c2a6e74a3f6ade0a909c912fec7df989fc75fd1de5a42d002dce3928b0120426"} Apr 04 03:56:02 crc kubenswrapper[4681]: I0404 03:56:02.514571 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" podStartSLOduration=1.382403847 podStartE2EDuration="2.514550835s" podCreationTimestamp="2026-04-04 03:56:00 +0000 UTC" firstStartedPulling="2026-04-04 03:56:00.903192521 +0000 UTC 
m=+7240.568967651" lastFinishedPulling="2026-04-04 03:56:02.035339499 +0000 UTC m=+7241.701114639" observedRunningTime="2026-04-04 03:56:02.508221171 +0000 UTC m=+7242.173996331" watchObservedRunningTime="2026-04-04 03:56:02.514550835 +0000 UTC m=+7242.180325965" Apr 04 03:56:03 crc kubenswrapper[4681]: I0404 03:56:03.495944 4681 generic.go:334] "Generic (PLEG): container finished" podID="83ee0782-fbec-4618-bac8-fde274cefda2" containerID="c2a6e74a3f6ade0a909c912fec7df989fc75fd1de5a42d002dce3928b0120426" exitCode=0 Apr 04 03:56:03 crc kubenswrapper[4681]: I0404 03:56:03.496051 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" event={"ID":"83ee0782-fbec-4618-bac8-fde274cefda2","Type":"ContainerDied","Data":"c2a6e74a3f6ade0a909c912fec7df989fc75fd1de5a42d002dce3928b0120426"} Apr 04 03:56:04 crc kubenswrapper[4681]: I0404 03:56:04.992524 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.087254 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtrb\" (UniqueName: \"kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb\") pod \"83ee0782-fbec-4618-bac8-fde274cefda2\" (UID: \"83ee0782-fbec-4618-bac8-fde274cefda2\") " Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.102496 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb" (OuterVolumeSpecName: "kube-api-access-4jtrb") pod "83ee0782-fbec-4618-bac8-fde274cefda2" (UID: "83ee0782-fbec-4618-bac8-fde274cefda2"). InnerVolumeSpecName "kube-api-access-4jtrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.190431 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtrb\" (UniqueName: \"kubernetes.io/projected/83ee0782-fbec-4618-bac8-fde274cefda2-kube-api-access-4jtrb\") on node \"crc\" DevicePath \"\"" Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.524332 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" event={"ID":"83ee0782-fbec-4618-bac8-fde274cefda2","Type":"ContainerDied","Data":"1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f"} Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.524394 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a28ccd1ad48eaf0dd526d4bd48b962064a3367a055d158c9b854cf7e225d65f" Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.524468 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587916-kn9w2" Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.593811 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587910-jwxww"] Apr 04 03:56:05 crc kubenswrapper[4681]: I0404 03:56:05.614758 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587910-jwxww"] Apr 04 03:56:07 crc kubenswrapper[4681]: I0404 03:56:07.221242 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0c05a9b-d399-4e3d-b738-aa14b9f23067" path="/var/lib/kubelet/pods/d0c05a9b-d399-4e3d-b738-aa14b9f23067/volumes" Apr 04 03:56:26 crc kubenswrapper[4681]: I0404 03:56:26.524621 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Apr 04 03:56:26 crc kubenswrapper[4681]: I0404 03:56:26.525253 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:56:36 crc kubenswrapper[4681]: I0404 03:56:36.478931 4681 scope.go:117] "RemoveContainer" containerID="b50b6a74e8263be1077722438d5cded0a552aab000461ffe00960f6f02ff2254" Apr 04 03:56:56 crc kubenswrapper[4681]: I0404 03:56:56.524098 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:56:56 crc kubenswrapper[4681]: I0404 03:56:56.525070 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:57:26 crc kubenswrapper[4681]: I0404 03:57:26.524591 4681 patch_prober.go:28] interesting pod/machine-config-daemon-v6mjr container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Apr 04 03:57:26 crc kubenswrapper[4681]: I0404 03:57:26.525428 4681 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Apr 04 03:57:26 crc kubenswrapper[4681]: I0404 03:57:26.525605 4681 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" Apr 04 03:57:26 crc kubenswrapper[4681]: I0404 03:57:26.527147 4681 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e"} pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Apr 04 03:57:26 crc kubenswrapper[4681]: I0404 03:57:26.527321 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerName="machine-config-daemon" containerID="cri-o://180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" gracePeriod=600 Apr 04 03:57:26 crc kubenswrapper[4681]: E0404 03:57:26.654996 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:57:27 crc kubenswrapper[4681]: I0404 03:57:27.506108 4681 generic.go:334] "Generic (PLEG): container finished" podID="d457ca0b-43c6-4bab-940c-5aa4ab124992" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" exitCode=0 Apr 04 03:57:27 crc kubenswrapper[4681]: I0404 03:57:27.506201 4681 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" event={"ID":"d457ca0b-43c6-4bab-940c-5aa4ab124992","Type":"ContainerDied","Data":"180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e"} Apr 04 03:57:27 crc kubenswrapper[4681]: I0404 03:57:27.506515 4681 scope.go:117] "RemoveContainer" containerID="fc8164135cc3b87b1c027469dcf913456cc05ebb2cc330fa19efa84a01296906" Apr 04 03:57:27 crc kubenswrapper[4681]: I0404 03:57:27.507472 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:57:27 crc kubenswrapper[4681]: E0404 03:57:27.508053 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:57:39 crc kubenswrapper[4681]: I0404 03:57:39.200801 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:57:39 crc kubenswrapper[4681]: E0404 03:57:39.201611 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:57:53 crc kubenswrapper[4681]: I0404 03:57:53.201087 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:57:53 crc kubenswrapper[4681]: E0404 
03:57:53.202305 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.164060 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587918-cx6dk"] Apr 04 03:58:00 crc kubenswrapper[4681]: E0404 03:58:00.165254 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83ee0782-fbec-4618-bac8-fde274cefda2" containerName="oc" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.165297 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="83ee0782-fbec-4618-bac8-fde274cefda2" containerName="oc" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.165792 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="83ee0782-fbec-4618-bac8-fde274cefda2" containerName="oc" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.166891 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.169335 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.169498 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.175523 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.178084 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587918-cx6dk"] Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.302175 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sx4w\" (UniqueName: \"kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w\") pod \"auto-csr-approver-29587918-cx6dk\" (UID: \"57b5371c-111e-4ecb-bdbf-56f17d6e5dff\") " pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.406307 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sx4w\" (UniqueName: \"kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w\") pod \"auto-csr-approver-29587918-cx6dk\" (UID: \"57b5371c-111e-4ecb-bdbf-56f17d6e5dff\") " pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.426856 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sx4w\" (UniqueName: \"kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w\") pod \"auto-csr-approver-29587918-cx6dk\" (UID: \"57b5371c-111e-4ecb-bdbf-56f17d6e5dff\") " 
pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:00 crc kubenswrapper[4681]: I0404 03:58:00.491839 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:01 crc kubenswrapper[4681]: I0404 03:58:01.066872 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587918-cx6dk"] Apr 04 03:58:01 crc kubenswrapper[4681]: I0404 03:58:01.914802 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" event={"ID":"57b5371c-111e-4ecb-bdbf-56f17d6e5dff","Type":"ContainerStarted","Data":"76d5fb9bc7c2ea60198e40686a3e8143b871fa9946bd511a75afb2789fb5a79d"} Apr 04 03:58:02 crc kubenswrapper[4681]: I0404 03:58:02.933619 4681 generic.go:334] "Generic (PLEG): container finished" podID="57b5371c-111e-4ecb-bdbf-56f17d6e5dff" containerID="205198e539b21eb88122dac305f9a5dda51cd204851103c3e5a27c98009b93f9" exitCode=0 Apr 04 03:58:02 crc kubenswrapper[4681]: I0404 03:58:02.933879 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" event={"ID":"57b5371c-111e-4ecb-bdbf-56f17d6e5dff","Type":"ContainerDied","Data":"205198e539b21eb88122dac305f9a5dda51cd204851103c3e5a27c98009b93f9"} Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.358153 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.397711 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sx4w\" (UniqueName: \"kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w\") pod \"57b5371c-111e-4ecb-bdbf-56f17d6e5dff\" (UID: \"57b5371c-111e-4ecb-bdbf-56f17d6e5dff\") " Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.405940 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w" (OuterVolumeSpecName: "kube-api-access-6sx4w") pod "57b5371c-111e-4ecb-bdbf-56f17d6e5dff" (UID: "57b5371c-111e-4ecb-bdbf-56f17d6e5dff"). InnerVolumeSpecName "kube-api-access-6sx4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.500489 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sx4w\" (UniqueName: \"kubernetes.io/projected/57b5371c-111e-4ecb-bdbf-56f17d6e5dff-kube-api-access-6sx4w\") on node \"crc\" DevicePath \"\"" Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.961776 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" event={"ID":"57b5371c-111e-4ecb-bdbf-56f17d6e5dff","Type":"ContainerDied","Data":"76d5fb9bc7c2ea60198e40686a3e8143b871fa9946bd511a75afb2789fb5a79d"} Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.961831 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76d5fb9bc7c2ea60198e40686a3e8143b871fa9946bd511a75afb2789fb5a79d" Apr 04 03:58:04 crc kubenswrapper[4681]: I0404 03:58:04.961932 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587918-cx6dk" Apr 04 03:58:05 crc kubenswrapper[4681]: I0404 03:58:05.446972 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587912-824ll"] Apr 04 03:58:05 crc kubenswrapper[4681]: I0404 03:58:05.466810 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587912-824ll"] Apr 04 03:58:07 crc kubenswrapper[4681]: I0404 03:58:07.221848 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c693e1eb-5790-4fef-b910-b6f710ea18cf" path="/var/lib/kubelet/pods/c693e1eb-5790-4fef-b910-b6f710ea18cf/volumes" Apr 04 03:58:08 crc kubenswrapper[4681]: I0404 03:58:08.201728 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:58:08 crc kubenswrapper[4681]: E0404 03:58:08.202588 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:58:21 crc kubenswrapper[4681]: I0404 03:58:21.213309 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:58:21 crc kubenswrapper[4681]: E0404 03:58:21.214820 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" 
podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:58:35 crc kubenswrapper[4681]: I0404 03:58:35.203178 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:58:35 crc kubenswrapper[4681]: E0404 03:58:35.203923 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:58:36 crc kubenswrapper[4681]: I0404 03:58:36.613030 4681 scope.go:117] "RemoveContainer" containerID="aea7ca15948bf7f495710ef6bcd39e27fb499a5ccb8428dc717ea78a8fdfc198" Apr 04 03:58:46 crc kubenswrapper[4681]: I0404 03:58:46.202395 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:58:46 crc kubenswrapper[4681]: E0404 03:58:46.203456 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:58:59 crc kubenswrapper[4681]: I0404 03:58:59.201114 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:58:59 crc kubenswrapper[4681]: E0404 03:58:59.201996 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:59:10 crc kubenswrapper[4681]: I0404 03:59:10.201859 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:59:10 crc kubenswrapper[4681]: E0404 03:59:10.202908 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:59:24 crc kubenswrapper[4681]: I0404 03:59:24.201396 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:59:24 crc kubenswrapper[4681]: E0404 03:59:24.202244 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:59:38 crc kubenswrapper[4681]: I0404 03:59:38.202369 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:59:38 crc kubenswrapper[4681]: E0404 03:59:38.203286 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:59:49 crc kubenswrapper[4681]: I0404 03:59:49.201392 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 03:59:49 crc kubenswrapper[4681]: E0404 03:59:49.202544 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.600342 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l8csl"] Apr 04 03:59:56 crc kubenswrapper[4681]: E0404 03:59:56.601356 4681 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57b5371c-111e-4ecb-bdbf-56f17d6e5dff" containerName="oc" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.601373 4681 state_mem.go:107] "Deleted CPUSet assignment" podUID="57b5371c-111e-4ecb-bdbf-56f17d6e5dff" containerName="oc" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.601616 4681 memory_manager.go:354] "RemoveStaleState removing state" podUID="57b5371c-111e-4ecb-bdbf-56f17d6e5dff" containerName="oc" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.603486 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.615346 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8csl"] Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.641845 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.641888 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nlmv\" (UniqueName: \"kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.641944 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.742962 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.743004 4681 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9nlmv\" (UniqueName: \"kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.743061 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.743517 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.743723 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.762719 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nlmv\" (UniqueName: \"kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv\") pod \"certified-operators-l8csl\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") " pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:56 crc kubenswrapper[4681]: I0404 03:59:56.938982 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l8csl" Apr 04 03:59:57 crc kubenswrapper[4681]: I0404 03:59:57.471414 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l8csl"] Apr 04 03:59:58 crc kubenswrapper[4681]: I0404 03:59:58.415834 4681 generic.go:334] "Generic (PLEG): container finished" podID="a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" containerID="737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c" exitCode=0 Apr 04 03:59:58 crc kubenswrapper[4681]: I0404 03:59:58.416156 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerDied","Data":"737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c"} Apr 04 03:59:58 crc kubenswrapper[4681]: I0404 03:59:58.416198 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerStarted","Data":"9c173654c53e4b2749cb196de9dfb0f1006dfa130f0d992dc36f18b725636c5f"} Apr 04 03:59:58 crc kubenswrapper[4681]: I0404 03:59:58.418721 4681 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Apr 04 03:59:59 crc kubenswrapper[4681]: I0404 03:59:59.428095 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerStarted","Data":"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"} Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.164069 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29587920-nrsdt"] Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.165650 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29587920-nrsdt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.173765 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.175380 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-xsscc" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.177775 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.181130 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587920-nrsdt"] Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.196496 4681 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk"] Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.197804 4681 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.199568 4681 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.200455 4681 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.217886 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk"] Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.319420 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wl9b\" (UniqueName: \"kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.319549 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nfd4\" (UniqueName: \"kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4\") pod \"auto-csr-approver-29587920-nrsdt\" (UID: \"267ed5f6-3a1c-40c4-bc57-b8494f169261\") " pod="openshift-infra/auto-csr-approver-29587920-nrsdt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.319599 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.319628 4681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.421942 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.422026 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.422327 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wl9b\" (UniqueName: \"kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.422469 4681 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nfd4\" (UniqueName: 
\"kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4\") pod \"auto-csr-approver-29587920-nrsdt\" (UID: \"267ed5f6-3a1c-40c4-bc57-b8494f169261\") " pod="openshift-infra/auto-csr-approver-29587920-nrsdt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.422953 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.428728 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.445047 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wl9b\" (UniqueName: \"kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b\") pod \"collect-profiles-29587920-mxdvk\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.451657 4681 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nfd4\" (UniqueName: \"kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4\") pod \"auto-csr-approver-29587920-nrsdt\" (UID: \"267ed5f6-3a1c-40c4-bc57-b8494f169261\") " pod="openshift-infra/auto-csr-approver-29587920-nrsdt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.494375 4681 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587920-nrsdt" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.517874 4681 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:00 crc kubenswrapper[4681]: I0404 04:00:00.814504 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29587920-nrsdt"] Apr 04 04:00:01 crc kubenswrapper[4681]: W0404 04:00:01.073569 4681 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5c91b0f_74e2_4467_8177_7a2f6421bd70.slice/crio-d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51 WatchSource:0}: Error finding container d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51: Status 404 returned error can't find the container with id d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51 Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.073746 4681 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk"] Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.212750 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e" Apr 04 04:00:01 crc kubenswrapper[4681]: E0404 04:00:01.215868 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992" Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.450845 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29587920-nrsdt" event={"ID":"267ed5f6-3a1c-40c4-bc57-b8494f169261","Type":"ContainerStarted","Data":"79f8c680938023291a9cc0d405e851768658bbae7d89e66e05e54304b49466ef"} Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.454913 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" event={"ID":"b5c91b0f-74e2-4467-8177-7a2f6421bd70","Type":"ContainerStarted","Data":"d84fa2277b874e81f982f98e3c1c8e95e0d92b4d4da08581be4aa479d369cd52"} Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.454994 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" event={"ID":"b5c91b0f-74e2-4467-8177-7a2f6421bd70","Type":"ContainerStarted","Data":"d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51"} Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.458800 4681 generic.go:334] "Generic (PLEG): container finished" podID="a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" containerID="ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19" exitCode=0 Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.458858 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerDied","Data":"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"} Apr 04 04:00:01 crc kubenswrapper[4681]: I0404 04:00:01.488549 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" podStartSLOduration=1.488529413 podStartE2EDuration="1.488529413s" podCreationTimestamp="2026-04-04 04:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-04 04:00:01.48138739 +0000 UTC m=+7481.147162510" 
watchObservedRunningTime="2026-04-04 04:00:01.488529413 +0000 UTC m=+7481.154304533" Apr 04 04:00:02 crc kubenswrapper[4681]: I0404 04:00:02.472080 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerStarted","Data":"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"} Apr 04 04:00:02 crc kubenswrapper[4681]: I0404 04:00:02.480651 4681 generic.go:334] "Generic (PLEG): container finished" podID="b5c91b0f-74e2-4467-8177-7a2f6421bd70" containerID="d84fa2277b874e81f982f98e3c1c8e95e0d92b4d4da08581be4aa479d369cd52" exitCode=0 Apr 04 04:00:02 crc kubenswrapper[4681]: I0404 04:00:02.480706 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" event={"ID":"b5c91b0f-74e2-4467-8177-7a2f6421bd70","Type":"ContainerDied","Data":"d84fa2277b874e81f982f98e3c1c8e95e0d92b4d4da08581be4aa479d369cd52"} Apr 04 04:00:02 crc kubenswrapper[4681]: I0404 04:00:02.503206 4681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l8csl" podStartSLOduration=2.979170673 podStartE2EDuration="6.503184936s" podCreationTimestamp="2026-04-04 03:59:56 +0000 UTC" firstStartedPulling="2026-04-04 03:59:58.418367523 +0000 UTC m=+7478.084142683" lastFinishedPulling="2026-04-04 04:00:01.942381836 +0000 UTC m=+7481.608156946" observedRunningTime="2026-04-04 04:00:02.500768791 +0000 UTC m=+7482.166543921" watchObservedRunningTime="2026-04-04 04:00:02.503184936 +0000 UTC m=+7482.168960066" Apr 04 04:00:03 crc kubenswrapper[4681]: I0404 04:00:03.863173 4681 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.014165 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume\") pod \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.014436 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume\") pod \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.014758 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2wl9b\" (UniqueName: \"kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b\") pod \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\" (UID: \"b5c91b0f-74e2-4467-8177-7a2f6421bd70\") " Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.015054 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5c91b0f-74e2-4467-8177-7a2f6421bd70" (UID: "b5c91b0f-74e2-4467-8177-7a2f6421bd70"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.015484 4681 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5c91b0f-74e2-4467-8177-7a2f6421bd70-config-volume\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.020842 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5c91b0f-74e2-4467-8177-7a2f6421bd70" (UID: "b5c91b0f-74e2-4467-8177-7a2f6421bd70"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.039514 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b" (OuterVolumeSpecName: "kube-api-access-2wl9b") pod "b5c91b0f-74e2-4467-8177-7a2f6421bd70" (UID: "b5c91b0f-74e2-4467-8177-7a2f6421bd70"). InnerVolumeSpecName "kube-api-access-2wl9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.117018 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2wl9b\" (UniqueName: \"kubernetes.io/projected/b5c91b0f-74e2-4467-8177-7a2f6421bd70-kube-api-access-2wl9b\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.117050 4681 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5c91b0f-74e2-4467-8177-7a2f6421bd70-secret-volume\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.322463 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5"]
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.331071 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29587875-dcgj5"]
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.504797 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk"
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.504821 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29587920-mxdvk" event={"ID":"b5c91b0f-74e2-4467-8177-7a2f6421bd70","Type":"ContainerDied","Data":"d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51"}
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.505255 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d85173ad19fecab31d6314ef9e94a3b6d1eec07ec9f68b03adaebaeecad84d51"
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.507601 4681 generic.go:334] "Generic (PLEG): container finished" podID="267ed5f6-3a1c-40c4-bc57-b8494f169261" containerID="4759354e325dd85dd8647d00c0f3a21e04ffe77b60b1135b9c0e359d79e0e1e0" exitCode=0
Apr 04 04:00:04 crc kubenswrapper[4681]: I0404 04:00:04.507685 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587920-nrsdt" event={"ID":"267ed5f6-3a1c-40c4-bc57-b8494f169261","Type":"ContainerDied","Data":"4759354e325dd85dd8647d00c0f3a21e04ffe77b60b1135b9c0e359d79e0e1e0"}
Apr 04 04:00:05 crc kubenswrapper[4681]: I0404 04:00:05.217930 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6fab752-2d68-40be-9ce6-c8146c76a53f" path="/var/lib/kubelet/pods/d6fab752-2d68-40be-9ce6-c8146c76a53f/volumes"
Apr 04 04:00:05 crc kubenswrapper[4681]: I0404 04:00:05.969985 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587920-nrsdt"
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.072718 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nfd4\" (UniqueName: \"kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4\") pod \"267ed5f6-3a1c-40c4-bc57-b8494f169261\" (UID: \"267ed5f6-3a1c-40c4-bc57-b8494f169261\") "
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.078301 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4" (OuterVolumeSpecName: "kube-api-access-2nfd4") pod "267ed5f6-3a1c-40c4-bc57-b8494f169261" (UID: "267ed5f6-3a1c-40c4-bc57-b8494f169261"). InnerVolumeSpecName "kube-api-access-2nfd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.176229 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nfd4\" (UniqueName: \"kubernetes.io/projected/267ed5f6-3a1c-40c4-bc57-b8494f169261-kube-api-access-2nfd4\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.541671 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29587920-nrsdt" event={"ID":"267ed5f6-3a1c-40c4-bc57-b8494f169261","Type":"ContainerDied","Data":"79f8c680938023291a9cc0d405e851768658bbae7d89e66e05e54304b49466ef"}
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.542118 4681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79f8c680938023291a9cc0d405e851768658bbae7d89e66e05e54304b49466ef"
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.541901 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29587920-nrsdt"
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.939237 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:06 crc kubenswrapper[4681]: I0404 04:00:06.939340 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.025191 4681 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.057854 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29587914-7pkxp"]
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.070192 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29587914-7pkxp"]
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.215558 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e6a4d29-81f9-4681-8edb-c4c48b272073" path="/var/lib/kubelet/pods/7e6a4d29-81f9-4681-8edb-c4c48b272073/volumes"
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.630065 4681 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:07 crc kubenswrapper[4681]: I0404 04:00:07.702358 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8csl"]
Apr 04 04:00:09 crc kubenswrapper[4681]: I0404 04:00:09.580028 4681 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l8csl" podUID="a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" containerName="registry-server" containerID="cri-o://9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93" gracePeriod=2
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.118635 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.281372 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities\") pod \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") "
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.281825 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content\") pod \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") "
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.281905 4681 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nlmv\" (UniqueName: \"kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv\") pod \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\" (UID: \"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17\") "
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.283128 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities" (OuterVolumeSpecName: "utilities") pod "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" (UID: "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.293019 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv" (OuterVolumeSpecName: "kube-api-access-9nlmv") pod "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" (UID: "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17"). InnerVolumeSpecName "kube-api-access-9nlmv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.374412 4681 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" (UID: "a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.385443 4681 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-utilities\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.385478 4681 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-catalog-content\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.385491 4681 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nlmv\" (UniqueName: \"kubernetes.io/projected/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17-kube-api-access-9nlmv\") on node \"crc\" DevicePath \"\""
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.600951 4681 generic.go:334] "Generic (PLEG): container finished" podID="a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" containerID="9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93" exitCode=0
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.601024 4681 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l8csl"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.601034 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerDied","Data":"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"}
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.601105 4681 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l8csl" event={"ID":"a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17","Type":"ContainerDied","Data":"9c173654c53e4b2749cb196de9dfb0f1006dfa130f0d992dc36f18b725636c5f"}
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.601150 4681 scope.go:117] "RemoveContainer" containerID="9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.659652 4681 scope.go:117] "RemoveContainer" containerID="ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.679429 4681 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l8csl"]
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.696755 4681 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l8csl"]
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.703929 4681 scope.go:117] "RemoveContainer" containerID="737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.772651 4681 scope.go:117] "RemoveContainer" containerID="9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"
Apr 04 04:00:10 crc kubenswrapper[4681]: E0404 04:00:10.773241 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93\": container with ID starting with 9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93 not found: ID does not exist" containerID="9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.773556 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93"} err="failed to get container status \"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93\": rpc error: code = NotFound desc = could not find container \"9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93\": container with ID starting with 9c9d8c88faf739b3b1cb6c4c3577c412af11cc44396801f1a117fcb063f81b93 not found: ID does not exist"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.773598 4681 scope.go:117] "RemoveContainer" containerID="ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"
Apr 04 04:00:10 crc kubenswrapper[4681]: E0404 04:00:10.774136 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19\": container with ID starting with ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19 not found: ID does not exist" containerID="ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.774206 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19"} err="failed to get container status \"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19\": rpc error: code = NotFound desc = could not find container \"ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19\": container with ID starting with ab8941283c44c528fafb86cd5eb30c84f359a0090e5265cf8d15036ca7c77a19 not found: ID does not exist"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.774247 4681 scope.go:117] "RemoveContainer" containerID="737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c"
Apr 04 04:00:10 crc kubenswrapper[4681]: E0404 04:00:10.774928 4681 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c\": container with ID starting with 737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c not found: ID does not exist" containerID="737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c"
Apr 04 04:00:10 crc kubenswrapper[4681]: I0404 04:00:10.774961 4681 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c"} err="failed to get container status \"737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c\": rpc error: code = NotFound desc = could not find container \"737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c\": container with ID starting with 737c98ad8bbc5f9ece3c51ff089e4268c90e59287ca95d474d94861e58328a7c not found: ID does not exist"
Apr 04 04:00:11 crc kubenswrapper[4681]: I0404 04:00:11.221863 4681 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17" path="/var/lib/kubelet/pods/a5ed9fa7-710d-4042-a7b0-b2d9d0d79f17/volumes"
Apr 04 04:00:16 crc kubenswrapper[4681]: I0404 04:00:16.201711 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e"
Apr 04 04:00:16 crc kubenswrapper[4681]: E0404 04:00:16.202707 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 04:00:30 crc kubenswrapper[4681]: I0404 04:00:30.200835 4681 scope.go:117] "RemoveContainer" containerID="180682ed0d3b3de36eeb8df5018598c89cc067a2cf2a904470cf9a396a739e1e"
Apr 04 04:00:30 crc kubenswrapper[4681]: E0404 04:00:30.201801 4681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-v6mjr_openshift-machine-config-operator(d457ca0b-43c6-4bab-940c-5aa4ab124992)\"" pod="openshift-machine-config-operator/machine-config-daemon-v6mjr" podUID="d457ca0b-43c6-4bab-940c-5aa4ab124992"
Apr 04 04:00:36 crc kubenswrapper[4681]: I0404 04:00:36.766842 4681 scope.go:117] "RemoveContainer" containerID="362954ca56b297c3ce1c3afe3d5c2c991ee8318e989e321b30e3b15925921afb"
Apr 04 04:00:36 crc kubenswrapper[4681]: I0404 04:00:36.858172 4681 scope.go:117] "RemoveContainer" containerID="730d5a884c75bee50776cf5a79c05491a6d9926b6cd8cb3425d02a126bb6504c"